Get Started To get started with the Ori Bare Metal service:
Step 1: Navigate to the Supercomputer cluster Once you sign up to the Ori platform, you land on the platform page, where you can see one of Ori's product offerings, Supercomputer.
Step 2: Request a New Server To set up a new bare metal server, follow these steps:
Begin by selecting the New Server option.
Select a configuration that meets your requirements. Our available configurations include 8xH100 SXM/PCIe and 8xL40S GPUs.
Finalize your setup by clicking the Submit button to send your request.
Step 3: Obtain Access We will contact you to confirm your request and provide you with access details. We will need your SSH public key for access. See our guide for setting up SSH keys (../virtual-machines/sshkeys.md).
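If you don't already have a key pair, you can generate one locally; a minimal sketch (the key type and comment are up to you):

```bash
# Generate an Ed25519 key pair and print the public key to share with Ori
ssh-keygen -t ed25519 -C "you@example.com"
cat ~/.ssh/id_ed25519.pub
```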
Step 4: Management Support Our team of trained infrastructure engineers will help you get going and support you throughout your project. Thank you for choosing Ori Global Cloud's Bare Metal service. We are excited to empower your high-performance projects with our dedicated bare metal solutions.
"label": "Bare Metal", "position": 7, "link": "type": "doc", "id": "introduction"
Installing NVIDIA Drivers (SXM)
Introduction NVIDIA SXM GPUs are specialized for high-performance computing and deep learning applications, offering superior performance and efficiency compared to standard PCIe GPUs.
Installation Steps for NVIDIA CUDA 12.8.1 (NVIDIA driver 570) on H200 SXM
Ubuntu 22.04
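The exact commands depend on your environment; a minimal sketch using NVIDIA's Ubuntu 22.04 CUDA repository (the package names are assumptions based on NVIDIA's standard repo layout):

```bash
# Add NVIDIA's CUDA repository keyring for Ubuntu 22.04
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update

# Install the CUDA 12.8 toolkit and the 570 driver branch, then reboot
sudo apt-get install -y cuda-toolkit-12-8 cuda-drivers-570
sudo reboot
```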
Verification
After reboot, reconnect to the SC cluster and verify that the driver is installed and check its version. Expected output:
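A minimal check (the exact figures will vary by node):

```bash
nvidia-smi
# The header of the output should report the new versions, e.g.:
# Driver Version: 570.xx.xx    CUDA Version: 12.8
```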
Overview Ori Supercomputers offer direct access to high-performance bare metal resources. Under private access, Supercomputers can dynamically provision a cluster composed of several CPU and GPU nodes, interconnected via InfiniBand with shared storage.
Bare Metal Service Features
Direct Hardware Access: Experience the raw power of physical GPUs, CPUs, and storage without any virtualization overhead.
High-Performance Computing: Tailored for intensive workloads such as ML model training, data analytics, and scientific computations, our Bare Metal service delivers the computational might necessary to process large datasets and complex algorithms efficiently.
Customizable Configurations: Select from a range of hardware specifications to meet the unique demands of your projects. Customize your setup to achieve the optimal balance of processing power, memory, and storage.
Private Cloud Security: Benefit from the heightened security of a private cloud environment. With Bare Metal, your resources are isolated, providing a secure space for sensitive computations and data storage.
Full Control and Flexibility: Enjoy complete control over your hardware, with the flexibility to configure the operating system, software stacks, and network settings to your exact specifications.
Ideal Use Cases The Bare Metal service is ideal for organizations and users who require:
A dedicated environment for compliance-sensitive ML/AI workloads.
Full customization and control over their computing infrastructure.
Elimination of noisy neighbor issues common in shared environments.
The ability to handle large-scale, performance-critical tasks without compromise.
"label": "Developer Tools (CLI)", "position": 8, "link": "type": "doc", "id": "introduction"
"label": "Commands", "position": 5, "link": "type": "doc", "id": "introduction"
ogc CLI tool for Ori Global Cloud
Options SEE ALSO ogc bucket (ogc bucket.md) - Manage Buckets in OGC ogc k8s (ogc k8s.md) - Manage k8s clusters in OGC ogc model (ogc model.md) - Manage models in OGC ogc ssh (ogc ssh.md) - Manage SSH key in OGC ogc version (ogc version.md) - CLI version ogc vm (ogc vm.md) - Manage VMs in OGC
ogc bucket Manage Buckets in OGC
Options Options inherited from parent commands SEE ALSO ogc (ogc.md) - CLI tool for Ori Global Cloud ogc bucket create (ogc bucket create.md) - Create a new bucket ogc bucket delete (ogc bucket delete.md) - Delete a bucket ogc bucket get (ogc bucket get.md) - Get bucket details ogc bucket list (ogc bucket list.md) - List of buckets ogc bucket list-locations (ogc bucket list-locations.md) - List of bucket locations ogc bucket list-types (ogc bucket list-types.md) - List of bucket types ogc bucket reset-key (ogc bucket reset-key.md) - Reset bucket access credentials
ogc bucket create Create a new bucket Examples Options Options inherited from parent commands SEE ALSO ogc bucket (ogc bucket.md) - Manage Buckets in OGC
ogc bucket delete Delete a bucket Examples Options Options inherited from parent commands SEE ALSO ogc bucket (ogc bucket.md) - Manage Buckets in OGC
ogc bucket get Get bucket details Examples Options Options inherited from parent commands SEE ALSO ogc bucket (ogc bucket.md) - Manage Buckets in OGC
ogc bucket list-locations List of bucket locations Examples Options Options inherited from parent commands SEE ALSO ogc bucket (ogc bucket.md) - Manage Buckets in OGC
ogc bucket list-types List of bucket types Examples Options Options inherited from parent commands SEE ALSO ogc bucket (ogc bucket.md) - Manage Buckets in OGC
ogc bucket list List of buckets Examples Options Options inherited from parent commands SEE ALSO ogc bucket (ogc bucket.md) - Manage Buckets in OGC
ogc bucket reset-key Reset bucket access credentials Examples Options Options inherited from parent commands SEE ALSO ogc bucket (ogc bucket.md) - Manage Buckets in OGC
ogc k8s Manage k8s clusters in OGC
Options Options inherited from parent commands SEE ALSO ogc (ogc.md) - CLI tool for Ori Global Cloud ogc k8s create (ogc k8s create.md) - Creates a new k8s cluster ogc k8s delete (ogc k8s delete.md) - Deletes a K8s cluster ogc k8s get (ogc k8s get.md) - Get k8s cluster ogc k8s get-config (ogc k8s get-config.md) - Get the KubeConfig file ogc k8s list (ogc k8s list.md) - List of k8s clusters ogc k8s list-locations (ogc k8s list-locations.md) - List k8s cluster locations ogc k8s resume (ogc k8s resume.md) - Resumes a k8s cluster ogc k8s suspend (ogc k8s suspend.md) - Suspends a k8s cluster
ogc k8s create Creates a new k8s cluster Examples Options Options inherited from parent commands SEE ALSO ogc k8s (ogc k8s.md) - Manage k8s clusters in OGC
ogc k8s delete Deletes a K8s cluster Examples Options Options inherited from parent commands SEE ALSO ogc k8s (ogc k8s.md) - Manage k8s clusters in OGC
ogc k8s get-config Get the KubeConfig file Examples Options Options inherited from parent commands SEE ALSO ogc k8s (ogc k8s.md) - Manage k8s clusters in OGC
ogc k8s get Get k8s cluster Examples Options Options inherited from parent commands SEE ALSO ogc k8s (ogc k8s.md) - Manage k8s clusters in OGC
ogc k8s list-locations List k8s cluster locations Examples Options Options inherited from parent commands SEE ALSO ogc k8s (ogc k8s.md) - Manage k8s clusters in OGC
ogc k8s list List of k8s clusters Examples Options Options inherited from parent commands SEE ALSO ogc k8s (ogc k8s.md) - Manage k8s clusters in OGC
ogc k8s resume Resumes a k8s cluster Examples Options Options inherited from parent commands SEE ALSO ogc k8s (ogc k8s.md) - Manage k8s clusters in OGC
ogc k8s suspend Suspends a k8s cluster Examples Options Options inherited from parent commands SEE ALSO ogc k8s (ogc k8s.md) - Manage k8s clusters in OGC
ogc model Manage models in OGC
Options Options inherited from parent commands SEE ALSO ogc (ogc.md) - CLI tool for Ori Global Cloud ogc model upload (ogc model upload.md) - Uploads a new Model Version
ogc model upload Uploads a new Model Version Examples Options Options inherited from parent commands SEE ALSO ogc model (ogc model.md) - Manage models in OGC
ogc ssh Manage SSH key in OGC
Options Options inherited from parent commands SEE ALSO ogc (ogc.md) - CLI tool for Ori Global Cloud ogc ssh create (ogc ssh create.md) - Creates a new SSH key ogc ssh delete (ogc ssh delete.md) - Deletes a SSH key ogc ssh list (ogc ssh list.md) - List SSH keys
ogc ssh create Creates a new SSH key Examples Options Options inherited from parent commands SEE ALSO ogc ssh (ogc ssh.md) - Manage SSH key in OGC
ogc ssh delete Deletes a SSH key Examples Options Options inherited from parent commands SEE ALSO ogc ssh (ogc ssh.md) - Manage SSH key in OGC
ogc ssh list List SSH keys Examples Options Options inherited from parent commands SEE ALSO ogc ssh (ogc ssh.md) - Manage SSH key in OGC
ogc version CLI version Options Options inherited from parent commands SEE ALSO ogc (ogc.md) - CLI tool for Ori Global Cloud
ogc vm Manage VMs in OGC
Options Options inherited from parent commands SEE ALSO ogc (ogc.md) - CLI tool for Ori Global Cloud ogc vm create (ogc vm create.md) - Creates a new VM ogc vm delete (ogc vm delete.md) - Deletes a VM ogc vm get (ogc vm get.md) - Get VM ogc vm list (ogc vm list.md) - List of VMs ogc vm list-locations (ogc vm list-locations.md) - List VM locations ogc vm list-os (ogc vm list-os.md) - List VM OS Images ogc vm list-sku (ogc vm list-sku.md) - List VM SKUs ogc vm restart (ogc vm restart.md) - Restarts a VM ogc vm resume (ogc vm resume.md) - Resumes a VM ogc vm suspend (ogc vm suspend.md) - Suspends a VM
ogc vm create Creates a new VM Examples Options Options inherited from parent commands SEE ALSO ogc vm (ogc vm.md) - Manage VMs in OGC
ogc vm delete Deletes a VM Examples Options Options inherited from parent commands SEE ALSO ogc vm (ogc vm.md) - Manage VMs in OGC
ogc vm get Get VM Examples Options Options inherited from parent commands SEE ALSO ogc vm (ogc vm.md) - Manage VMs in OGC
ogc vm list-locations List VM locations Examples Options Options inherited from parent commands SEE ALSO ogc vm (ogc vm.md) - Manage VMs in OGC
ogc vm list-os List VM OS Images Examples Options Options inherited from parent commands SEE ALSO ogc vm (ogc vm.md) - Manage VMs in OGC
ogc vm list-sku List VM SKUs Examples Options Options inherited from parent commands SEE ALSO ogc vm (ogc vm.md) - Manage VMs in OGC
ogc vm list List of VMs Examples Options Options inherited from parent commands SEE ALSO ogc vm (ogc vm.md) - Manage VMs in OGC
ogc vm restart Restarts a VM Examples Options Options inherited from parent commands SEE ALSO ogc vm (ogc vm.md) - Manage VMs in OGC
ogc vm resume Resumes a VM Examples Options Options inherited from parent commands SEE ALSO ogc vm (ogc vm.md) - Manage VMs in OGC
ogc vm suspend Suspends a VM Examples Options Options inherited from parent commands SEE ALSO ogc vm (ogc vm.md) - Manage VMs in OGC
Working with Virtual Machines In this section, we'll go over the steps needed to provision and manage Virtual Machines on OGC. Let's look at a few example commands and their output.
List available machines
Before creating a VM, you first need to decide what machine characteristics you need, as well as a few other details. The command below offers a view of what machines are currently available through OGC. To get more information on each SKU, view the Quota page on OGC (under Settings > Resource Quotas).
Create SSH Keys
In order to create a Virtual Machine, you first need to create an SSH key pair (docs/virtual-machines/sshkeys.md), which will allow you to access the instance. After creating your SSH key pair, you will use the public key to create an OGC SSH key resource.
Create Virtual Machine
Once we have decided on the machine type, location, and OS image, and created the SSH key, we are ready to create a Virtual Machine, for example an L40S machine. The full workflow is sketched after the note below. :::info
This will create a compute resource that will incur cost.
:::
Manage Virtual Machine Once a machine is Available, you can perform administrative actions on it (you can read more about VM actions here (docs/virtual-machines/actions.md)).
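To recap the workflow above in commands: the subcommand names below are documented elsewhere in this section, but the flag names are assumptions, so check `ogc vm create --help` for the exact syntax:

```bash
# Discover available SKUs, locations, and OS images
ogc vm list-sku
ogc vm list-locations
ogc vm list-os

# Register your public key as an OGC SSH key resource (flags illustrative)
ogc ssh create --name my-key --public-key "$(cat ~/.ssh/id_ed25519.pub)"

# Create an L40S VM (flags illustrative)
ogc vm create --name my-vm --sku <l40s-sku> --location <location> --os <image> --ssh-key my-key
```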
Installation
macOS and Linux To install the Ori CLI, run the installer from the releases page. You then need to configure an environment variable, as sketched after the notes below. :::info
To get an API key, navigate Settings, then to API Tokens Tab, and click Add to get a new API Token.
:::
:::info
If you have a problem with the package installation, you may be missing required packages such as jq (https://jqlang.github.io/jq/).
:::
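A sketch of the environment configuration; the variable name OGC_API_TOKEN is an assumption, so check the installer output for the exact name:

```bash
# Make the API token available to the CLI (variable name is an assumption)
export OGC_API_TOKEN="<your-api-token>"
```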
Windows Go to the download page (https://github.com/ori-edge/ogc-cli-releases/releases/tag/latest) and download the version that is appropriate for your platform. Unzip the file into your main user directory. Now set the environment variable: :::info
To get an API key, navigate Settings, then to API Tokens Tab, and click Add to get a new API Token.
:::
Verifying your installation
That's it, you are now ready to use the CLI. To try it out, run the command sketched below. If the installation was successful, you should obtain a list of available SKUs.
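Any read-only command works as a smoke test; this one lists the available VM SKUs:

```bash
# A successful response confirms the CLI can reach the API
ogc vm list-sku
```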
Overview Welcome to the Command Line Interface (CLI) section of the Ori Global Cloud Documentation! Here, you'll discover the power and flexibility of Ori's CLI, a command-driven interface that empowers you to interact with our cloud platform efficiently and effectively. Whether you're a developer, sysadmin, or a DevOps engineer looking to harness the full potential of Ori, our CLI provides a streamlined and scriptable way to manage your cloud resources, deploy applications, and automate tasks.

In this documentation, we'll delve into the intricacies of Ori's CLI, guiding you through its installation, practical examples, and detailed documentation for each command. From configuring your environment to provisioning resources, monitoring availability, and orchestrating complex workflows, you'll find comprehensive insights and examples that empower you to make the most of your Ori experience.

Whether you're a command-line aficionado or new to the world of CLI-driven cloud management, our documentation is designed to be your trusted companion. It's time to unlock the efficiency and control that the Ori CLI brings to your cloud operations. Let's embark on this journey together as we explore the commands and capabilities that will help you achieve your cloud computing goals with ease.
"label": "Endpoints", "position": 4, "link": "type": "doc", "id": "overview"
Auto Scaling Auto Scaling ensures that your endpoint can handle varying loads efficiently, adjusting the number of replicas between the min/max number set for the Endpoint. Auto Scaling will be triggered based on concurrent requests you send to the endpoint and GPU utilisation. :::info
Auto Scaling can take some time to provision/de-provision replicas.
:::
Scale to Zero Policy The Scale to Zero policy allows the Endpoint to go to zero replicas after a certain period with no incoming requests.
Billing Endpoints are billed per-minute and based on the GPU resource being used. You can read more about billing in our Support (../support/overview.md) section.
Get Started
Step 1: Create an endpoint
Select a Model: Choose from our list of popular, pre-trained models.
Configure Compute Resources:
GPU Selection: Select the GPU type based on your performance and cost requirements. OGC might provide a recommended GPU that ensures the optimal cost/performance for the selected model.
Location: Choose the geographical location where your endpoint will be hosted.
Autoscaling (Optional):
Replicas: Define the minimum and maximum number of replicas for your endpoint to scale between.
Scale to Zero Policy: If the minimum replicas are set to zero, your endpoint will scale down to zero replicas based on the defined policy.
Step 2: Accessing your Endpoints After creating, you are provided with an Authorization Token. When accessing inference endpoints, an authorization token is required to ensure secure access through the Authorization Header. :::info
Authentication Token is only shown once upon creation. You can get a new one under the Request Access Token option for your endpoint.
:::
You'll be able to see a sample cURL command under the endpoint details. This is an example for a Llama 3.1 8B model:
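A sketch of such a request, assuming an OpenAI-compatible chat completions route; the exact URL, path, and model identifier come from your endpoint's details page:

```bash
curl https://<your-endpoint-url>/v1/chat/completions \
  -H "Authorization: Bearer <your-access-token>" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama-3.1-8b",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```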
Overview OGC Inference Endpoints is the easiest way to deploy machine learning models as scalable API endpoints. It provides you with the flexibility to choose from a variety of pre-trained models, specify your compute requirements, and deploy them in specific locations to minimize latency and optimize performance. It allows you to select two types of models: Pre-trained models - Choose from Llama 3.1, Llama 3.2, Mistral, Qwen 2.5, and many more. Custom Models - Coming soon.
"label": "Fine-Tuning", "position": 5, "link": "type": "doc", "id": "introduction"
Billing
Overview of Billing Components Fine-tuning jobs are billed based on the number of tokens processed and the model size tier.
Token-Based Pricing Charged per 1M tokens, calculated as the total tokens processed during training (sum across all epochs), plus tokens used in the optional validation dataset.
Model Size Tier Pricing is grouped by model size. For example, models up to 10B parameters share the same token rate. :::note
All fine-tuning uses LoRA, enabling faster training at lower compute cost.
:::
Example Usage Billing Let's say you're fine-tuning a model with 8B parameters, which falls under OGC's "up to 10B" pricing tier and is billed at $0.00000046 per token (i.e., $0.46 per 1M tokens).
Your training dataset contains 5 million tokens.
You run 3 epochs, so the total processed is 5M × 3 = 15 million tokens.
You also include a validation set with 1 million tokens.
Total tokens billed: 15M (training) + 1M (validation) = 16 million tokens.
Billing Calculation:
16M tokens × $0.00000046 = $7.36. Your total cost for this fine-tuning job would be $7.36.
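To reproduce the arithmetic:

```bash
# 16,000,000 tokens at 0.00000046 per token
awk 'BEGIN { print 16000000 * 0.00000046 }'   # prints 7.36
```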
Dataset Requirements The structure of the input datasets you want to use for training is essential for training performant models. Here is a guide for preparing and formatting your datasets for fine-tuning Large Language Models (LLMs) using OGC's Fine-Tuning Service.
Supported Formats Currently OGC supports Hugging Face datasets and Custom datasets which can be your own datasets with the following formats:
.CSV
.JSONL
Dataset Structure Your dataset must be structured to contain training examples that help the LLM learn the desired behavior. Each training example should include both an input (prompt/instruction/question) and the expected output (response/answer).
Required Structure Options:
Option 1: Single Column Format Your dataset should have one column containing the complete training text. This column can have any name (commonly "text"); an example is sketched below.
Important: When using the single-column format, you must include special tokens in your text to clearly separate the instruction from the response. The exact tokens depend on your chosen base model (see the Best Practices section below).
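For instance, a single-column .JSONL file might look like the sketch below; the Llama-style `<s>[INST] ... [/INST] ... </s>` tokens are one example, so use the tokens for your base model:

```bash
cat > train.jsonl <<'EOF'
{"text": "<s>[INST] What is the capital of France? [/INST] The capital of France is Paris. </s>"}
{"text": "<s>[INST] Summarize photosynthesis in one sentence. [/INST] Photosynthesis is the process by which plants convert light into chemical energy. </s>"}
EOF
```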
Option 2: Two Column Format Your dataset should have two separate columns with these exact names: a "prompt" column, containing prompts, instructions, or questions, and a "completion" column, containing the desired responses or answers; see the sketch below. (In the single-column example above, `<s>` marks the beginning of a new sequence, `[INST]` and `[/INST]` delimit simple prompts and responses, and `</s>` marks the end of a sequence.)
Datasets may instead contain two columns named "input" and "output". For two-column datasets with "input" and "output" columns, no special tokens are required; the system will automatically handle the formatting during processing. Both two-column variants are sketched below.
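For instance, in .JSONL form (file names are illustrative):

```bash
# prompt/completion naming
cat > train_pc.jsonl <<'EOF'
{"prompt": "What is the capital of France?", "completion": "The capital of France is Paris."}
EOF

# input/output naming (no special tokens needed)
cat > train_io.jsonl <<'EOF'
{"input": "What is the capital of France?", "output": "The capital of France is Paris."}
EOF
```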
Validation Dataset Used to evaluate model performance during training and prevent overfitting. It should follow the same format as the training dataset (.CSV or .JSONL).
User-Provided Validation Dataset: You can directly upload a separate validation dataset through the Ori interface.
Best Practices for Preparing Your Dataset
Consistency: Keep prompt and response formatting consistent across all data points.
Clear Delimitation: To obtain good quality results, always use the special tokens specific to the base model you're using to ensure clear separation between prompts and completions. For example, if you're using the Qwen/Qwen2.5-1.5B base model, you should use its chat template structure (sketched below).
Balanced Dataset: Aim for a varied and balanced dataset to enhance the model's generalization capabilities.
Following these guidelines ensures your fine-tuning process will yield the best results, providing accurate, relevant, and context-aware responses from your model.
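Qwen 2.5 models use ChatML-style tokens; a single training example would look roughly like this (the exact whitespace and any system prompt are illustrative):

```
<|im_start|>user
What is the capital of France?<|im_end|>
<|im_start|>assistant
The capital of France is Paris.<|im_end|>
```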
Get Started This guide will help you get started with Fine-Tuning models. Our Fine-Tuning feature enables you to train popular LLMs on any publicly available dataset hosted on Hugging Face as well as custom datasets. Follow the steps below to launch and manage a fine-tuning job.
Step 1: Select Base Model You can choose from a given set of open-source base models to fine-tune.
Step 2: Add Dataset You can either specify the path of the Hugging Face dataset you'd like to use for fine-tuning, or bring your own dataset under the Custom tab. There are two types of datasets:
Training: Dataset to train your model against.
Validation (optional): Dataset to validate the model against unseen data that is outside of the training loop.
Requirements
Source HuggingFace dataset: Full path (example: imdatta0/ultrachat_1k)
Custom Dataset: You can also choose your uploaded dataset.
Step 3: Output Job Details Provide the following:
Suffix: A custom identifier added to your Job ID for easier tracking.
Registry: Choose the model registry where the fine-tuned model will be saved for future deployment.
Seed: Set a random seed for reproducibility (defaults to 42).
Step 4: Configure Hyperparameters Each hyperparameter comes with a default value, which you can adjust.
Step 5: Set LoRA Parameters
Step 6: Launch the Job Once a job is launched, it will automatically start the training run. These runs can vary in time, depending on the configurations set. You can track the progress of your job by monitoring its status as it transitions from Initializing to Running to Completed.
Step 8: Post-Job Artifacts Once the job is completed:
Navigate to the job's details view to view the list of Checkpoints created for epochs.
For each checkpoint, you'll see:
Training Loss: The error (or loss) the model incurred while learning from the training dataset. A decreasing training loss generally indicates the model is learning, but if it's very low compared to the validation loss, it might be overfitting.
Validation Loss: Visible only if a validation dataset is provided. The error computed using the validation dataset (unseen during training) is an indicator of how well the model will perform on real-world or unseen data.
From the selected checkpoint, you can deploy the fine-tuned model by either:
Navigating to the Model Registry, where the model weights become Available in your chosen Registry and location, or
Deploying the model directly to an Endpoint. Registered model weights appear under Model Registry > Model Versions. Learn more about the Model Registry (/docs/modelregistry/overview.md). From here, you can deploy the fine-tuned model to Endpoints for inference.
Overview This guide provides an overview of the model training process used by OGC's Fine-Tuning Service, highlighting the key aspects and configurations involved in fine-tuning Large Language Models (LLMs).
Understanding Fine-Tuning Fine-tuning allows you to adapt a pre-trained Large Language Model to your specific use case, enhancing its performance for particular tasks or domains without retraining the entire model from scratch.
Efficient Fine-Tuning with LoRA OGC uses a technique called Low-Rank Adaptation (LoRA) for fine-tuning, which: Significantly reduces computational resources and memory requirements.
Maintains or enhances model performance compared to traditional fine-tuning methods.
Trains a minimal number of parameters efficiently: it only updates a small subset of the model's parameters while keeping the rest frozen, thus speeding up the fine-tuning process.
Training Configuration When launching your fine-tuning job through OGC's user interface, you'll have control over several key training parameters: Batch Size: Adjust batch sizes to fit within available GPU memory.
A larger batch size can lead to faster training, but with the risk of getting stuck in a local minimum, so model convergence might be delayed.
Number of Training Epochs: Set the number of forward and backward passes where each batch is processed once to calculate the loss and gradients.
Evaluation Metrics and Early Stopping: Automatic evaluation during training helps select the best-performing model.
Learning Rate: Setting up a base learning rate is highly effective because it's memory-efficient and adapts the learning rate dynamically for each parameter.
LoRA (Low-Rank Adaptation) is a technique that significantly reduces the number of trainable parameters, making fine-tuning faster and more memory-efficient. Key LoRA parameters that you can configure are:
LoRA Rank (r): This determines the dimensionality of the LoRA matrices. A smaller rank means fewer trainable parameters and faster training, but potentially lower model performance. A common range for rank is 4 to 16.
LoRA Alpha (alpha): This is a scaling factor for the LoRA weights. It's typically set to be the same as the rank or twice the rank. It controls how much the LoRA adaptation influences the original model's weights.
LoRA Dropout (dropout): This is a regularization technique applied to the LoRA layers during training to prevent overfitting. It randomly sets a fraction of the LoRA activations to zero. A common value is 0.1.
Monitoring and Managing Training Jobs OGC's Fine-Tuning provides intuitive tools to monitor your fine-tuning jobs, enabling you to:
Access detailed logs on the training and validation loss to understand model performance.
Manage checkpoints effectively by always evaluating them on the validation loss, so the best version of your model is always logged.
Advantages of Fine-Tuning on OGC
One-click fine-tuning
Job management
Custom dataset / HF integration
Integration with OGC's Model Registry and Inference Endpoints
Fine-Tuning use cases:
Customer Support Chatbot: Fine-tune on historical tickets and help center content for automated support.
Product Q&A Assistant: Train on catalogs and reviews to answer product-related queries.
Legal Document Analyzer: Fine-tune on contracts to extract clauses, obligations, and risks.
Codebase Assistant: Train on internal code and documentation to support developer queries.
Clinical Trial Chatbot: Fine-tune on trial protocols and FAQs for real-time investigator assistance.
Manufacturing Defect Classifier: Train vision models on proprietary defect images for quality control.
"label": "Kubernetes", "position": 3, "link": "type": "doc", "id": "introduction"
Actions OGC Serverless Kubernetes allows you to Suspend and Resume clusters to manage costs or reduce resource usage during periods of low activity.
Suspend Suspending a cluster involves halting all pods and scaling down the resources to the minimum, effectively putting the cluster in a "sleep" mode. This stops all billing, as no pods (and therefore no nodes) will be allocated to the cluster. :::warning
When a cluster goes into Suspended state, it will purge all active pods. Pods that are not managed by a deployment or stateful set will not be restarted.
:::
Resume Resuming a cluster will re-enable all the deployments and allow nodes to be allocated to the cluster.
Billing
Overview of Billing Components Serverless Kubernetes is billed per minute, based on the resources your pods use. Here are the key components of our billing model:
GPU: Each GPU used in your pods is billed at a fixed rate.
vCPU: Every vCPU used by your pods is billed. If your pods use CPU resources without a GPU, these are billed separately. CPU usage is billed at 1/100th the cost of a vCPU.
Memory: Memory usage is billed per MB. This allows for granular control over the memory allocation and cost.
Load Balancer: Running a load balancer on the cluster.
Billing for Suspended/Stopped/Idle Resources Pods are only billed when they are actively running. If a pod is stopped, suspended, or idle, you will not be charged for those resources during that time. When a cluster is Suspended, all pods are stopped and are no longer billed for. The exception to this is load balancers, as the IP address is persisted while the cluster is suspended.
Resource Pricing The following tables outline the pricing per hour/per minute for: 1. GPU Resources: 2. Other Resources: You can read more about billing in our Support (../support/overview.md) section.
"label": "Examples", "position": 7, "link": "type": "doc", "id": "Introduction"
KEDA Autoscaling for an Ollama Application This tutorial will walk you through the setup of KEDA-based autoscaling for an Ollama deployment using GPU usage as a metric. We'll cover the following:
Installing KEDA using Helm
Creating and Deploying the GPU Usage Metrics API
Creating a Service for the Metrics API
Configuring the KEDA ScaledObject for Autoscaling
Deploying Ollama with Autoscaling Enabled
Step 1: Installing KEDA with Helm First, install KEDA on your Kubernetes cluster using Helm, then verify the installation; you should see the KEDA operator pods running. A sketch follows.
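The standard Helm installation, assuming the official KEDA chart repository:

```bash
# Add the official KEDA chart repo and install into its own namespace
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --create-namespace

# Verify: the KEDA operator pods should be Running
kubectl get pods -n keda
```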
Step 2: Creating and Deploying the GPU Usage Metrics API First, create a namespace where you'll apply all the manifests. We'll create a FastAPI application that provides a simple endpoint returning GPU usage metrics, which will be used by KEDA for scaling decisions; however, you can create any custom metric application to trigger the KEDA autoscaling. Containerise the application with a Dockerfile, then build and push the Docker image.
Step 3: Deploying the GPU Usage Metrics API in Kubernetes Now, create the Kubernetes resources to deploy this Metrics API (gpu-usage-deployment.yaml), expose it as a service (gpu-usage-service.yaml), and apply both manifests.
Step 4: Configuring the KEDA ScaledObject for Autoscaling Once the Metrics API is deployed, you can configure KEDA to autoscale the Ollama deployment based on GPU usage. Create a scaled object file, gpu-usage-scaledobject.yaml, and apply it; a sketch is included at the end of this tutorial.
Step 5: Deploying Ollama You should already have your Ollama deployment ready; apply your ollama-deployment.yaml manifest to the cluster.
Step 6: Monitoring Autoscaling You can monitor the autoscaling behavior of your deployment and check that KEDA is set up correctly by querying the status of the deployed scaled object. In the output:
READY: True indicates that the scaled object is correctly configured and ready to make scaling decisions.
ACTIVE: True indicates that scaling is currently active (based on the current GPU usage metrics).
FALLBACK: False means KEDA is not relying on the fallback configuration and is successfully fetching metrics. If you see ACTIVE as False, it may indicate that the metrics endpoint hasn't reached the scaling threshold yet.
Now check whether the pods are autoscaling. If you remember, we deployed a single pod; you should now see multiple pods running, but no more than 10, as defined in the scaled object manifest. Watching the Ollama deployment pods monitors their state as KEDA triggers scaling actions; the expected output will show the number of replicas scaling up or down based on GPU usage. As scaling occurs, you will observe new pods being created (Pending to Running) or pods being removed (Terminating) in real time.
In this tutorial, you've learned how to set up KEDA-based autoscaling for the Ollama deployment using GPU usage as the metric. We covered the installation of KEDA, the deployment of a metrics API, the creation of a ScaledObject, and the configuration for autoscaling based on GPU usage. Feel free to adjust the threshold values and configurations to suit your requirements.
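For reference, a sketch of the Step 4 ScaledObject and the Step 6 monitoring commands; the namespace, service name, JSON field, threshold, and labels are assumptions to adapt to your setup (KEDA's generic metrics-api scaler is used here):

```bash
# Step 4: a minimal ScaledObject targeting the Ollama deployment
kubectl apply -n keda-demo -f - <<'EOF'
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: gpu-usage-scaledobject
spec:
  scaleTargetRef:
    name: ollama              # the Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 10         # the upper bound referenced in Step 6
  triggers:
    - type: metrics-api       # KEDA's generic REST metrics scaler
      metadata:
        url: "http://gpu-usage-service:8000/gpu-usage"
        valueLocation: "gpu_usage"   # JSON field holding the metric
        targetValue: "50"            # scale out above this value
EOF

# Step 6: check the scaled object's READY / ACTIVE / FALLBACK columns
kubectl get scaledobject gpu-usage-scaledobject -n keda-demo

# Watch the Ollama pods scale up and down
kubectl get pods -n keda-demo -l app=ollama -w
```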
LLM Application (Ollama Open WebUI) Welcome to the guide on how to build and deploy large language model (LLM) applications using our serverless Kubernetes platform. This document provides a simple walkthrough for deploying an LLM application using a combination of Ollama and OpenWebUI, and leveraging ready-to-use Docker containers. By the end of this guide, you'll have a scalable LLM application up and running on OGC. ![OGC Assistant](../../../static/img/OpenWebUI example.JPG)
Prerequisites Before you start, ensure you have the following:
A registered account with Ori Global Cloud.
Docker installed on your local machine.
kubectl CLI tool installed on your local machine. This tool is necessary to communicate with your Kubernetes cluster. A Kubernetes cluster created and configured according to the Ori Kubernetes Get Started Guide (https://docs.ori.co/kubernetes/get-started).
Step 1: Create an Entrypoint Script with ConfigMap To automate the startup of the Ollama service and the pulling of the Llama model, we will create a custom entrypoint script. This script will be stored in a ConfigMap and mounted into the Ollama container. Create a file named entrypoint-configmap.yaml along the lines of the sketch below. This script starts the Ollama service, pulls the Llama 3.1 model, and ensures that the service remains running. Apply the ConfigMap to your Kubernetes cluster.
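A sketch of the ConfigMap; the resource name and model tag are assumptions consistent with the rest of this guide:

```bash
cat > entrypoint-configmap.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: ollama-entrypoint
data:
  entrypoint.sh: |
    #!/bin/sh
    # Start the Ollama server in the background
    ollama serve &
    # Give the server a moment to come up, then pull the model
    sleep 5
    ollama pull llama3.1
    # Keep the container alive on the background server process
    wait
EOF

kubectl apply -f entrypoint-configmap.yaml
```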
Step 2: Create Deployments To deploy your application, you'll need to create Kubernetes deployment manifests for both the Ollama and OpenWebUI services. These manifests define the desired state of your application, including the containers to run, the ports to expose, the persistent volume, and the entrypoint script.
Ollama Deployment Create a file named ollama-deployment.yaml along the lines of the sketch below. This manifest specifies that the Ollama service will use the ollama/ollama:latest ready-to-use Docker image and be exposed on port 80 (through the service in Step 3), requests a single L40S GPU, and references a PV. You can access the image here (https://hub.docker.com/r/ollama/ollama). Apply the manifest to your Kubernetes cluster.
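A sketch of the Ollama deployment, assuming the node selector and GPU request conventions described in the Node Selectors guide; resource names are illustrative:

```bash
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama
  template:
    metadata:
      labels:
        app: ollama
    spec:
      nodeSelector:
        gpu.nvidia.com/class: L40S      # run on an L40S node
      containers:
        - name: ollama
          image: ollama/ollama:latest
          command: ["/bin/sh", "/scripts/entrypoint.sh"]
          ports:
            - containerPort: 11434      # Ollama's default port
          resources:
            limits:
              nvidia.com/gpu: 1         # a single L40S GPU
          volumeMounts:
            - name: entrypoint
              mountPath: /scripts
      volumes:
        - name: entrypoint
          configMap:
            name: ollama-entrypoint
            defaultMode: 0755
EOF
```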
OpenWebUI Deployment Next, create a file named openwebui-deployment.yaml. This manifest specifies that the OpenWebUI service will use the ghcr.io/open-webui/open-webui:main Docker image, expose port 8080, and connect to the Ollama service via an environment variable. You can find more information about OpenWebUI here (https://github.com/open-webui/open-webui?tab=readme-ov-file). Now deploy OpenWebUI to your Kubernetes cluster.
Step 3: Create Services To make the Ollama and OpenWebUI deployments accessible within and outside the Kubernetes cluster, you need to create service manifests. These services route traffic to the appropriate pods, allowing communication between different parts of your application and external clients.
Ollama Service Create a file named ollama-service.yaml. This manifest instructs the Ollama service to be exposed via a load balancer; the service listens on port 80 externally and forwards traffic to port 11434 on the Ollama pod.
OpenWebUI Service Next, create a file named openwebui-service.yaml. The OpenWebUI service is also exposed via a load balancer; it listens on port 8080 externally and forwards traffic to the same port on the OpenWebUI pod. Both services are sketched below; expose them by applying the manifests.
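A sketch of both service manifests, matching the ports above (the pod label selectors assume the deployments sketched earlier):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: ollama-service
spec:
  type: LoadBalancer
  selector:
    app: ollama
  ports:
    - port: 80            # external port
      targetPort: 11434   # Ollama pod port
---
apiVersion: v1
kind: Service
metadata:
  name: openwebui-service
spec:
  type: LoadBalancer
  selector:
    app: openwebui
  ports:
    - port: 8080
      targetPort: 8080
EOF
```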
Step 4: Verify Deployments After deploying the Ollama and OpenWebUI services, it's important to verify that everything is running as expected. This step will guide you through checking the status of your deployments and services within the Kubernetes cluster.
Check Pod Status Check the status of the pods with the command below; you should see the Ollama and OpenWebUI pods listed with a status of Running.
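```bash
kubectl get pods
```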
Step 5: Access the Ollama Service With the Ollama service deployed and verified, the next step is to access the service to ensure it's working properly. This involves retrieving the external IP address of the service and interacting with it directly.
Retrieve the External IP of the Ollama Service To get the external IP address assigned to the Ollama service, use the command sketched below. It will return details about the ollama-service, including its external IP address. Note the value under EXTERNAL-IP: this is the IP address you can use to access the Ollama service. You can load this IP in your web browser to check if the service is running. You should see a message saying: Ollama is running.
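The lookup, with illustrative output:

```bash
kubectl get svc ollama-service
# NAME             TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)        AGE
# ollama-service   LoadBalancer   10.x.x.x     203.0.113.10   80:xxxxx/TCP   5m
```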
Step 6: Access OpenWebUI Similarly to the Ollama service, copy the external IP of the OpenWebUI service: Load this IP in your web browser. You'll be taken to the OpenWebUI interface.
Step 7: Customise OpenWebUI and Deploy Your Chatbot OpenWebUI provides a ChatGPT-like interface with an integrated RAG for interacting with your models. Follow these steps to customise your chatbot:
Model Selection
Check if the Llama 3.1 8B model is available for selection.
Customisation
Use the OpenWebUI interface to create a custom chatbot: Naming: Give your chatbot a name. Upload Documents: Upload specific documents that provide context for your model. Set Prompt Template: Define a prompt template to control the tone of responses and restrict the chatbot to answer questions based on the uploaded documents.
Step 9 (Optional): Scale Your Deployment Using OGC's serverless Kubernetes, scaling your LLM-based application is straightforward. Adjust the replicas field in your deployment manifests to increase or decrease the number of instances running. To speed up your inference even further, you can also easily increase the number of GPUs, or use more powerful GPUs, in the Ollama deployment manifest file. Note: You are not restricted to using one type of Llama model; you can pull any model supported by Ollama. Check the list of available models here (https://github.com/ollama/ollama)
Step 10 (Optional): Use Persistent Storage for the Model To prevent the model from being pulled every time the pod restarts, we can use a Persistent Volume (PV) and Persistent Volume Claim (PVC) to store the model persistently. This way, the model is only pulled once, and subsequent pod restarts will use the already downloaded model. Define the pv-pvc.yaml file along the lines of the sketch below, and apply the PV and PVC configuration.
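A sketch of a PV/PVC pair for the model cache; the capacity, access mode, and host path are assumptions, and your cluster may provision volumes differently:

```bash
cat > pv-pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ollama-models-pv
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: /data/ollama-models
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ollama-models-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Gi
EOF

kubectl apply -f pv-pvc.yaml
```

Mount the claim at /root/.ollama in the Ollama container so pulled models survive restarts.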
Congratulations! You have successfully deployed an LLM-based application using Ollama and OpenWebUI on Ori Global Cloud's serverless Kubernetes. This powerful combination allows you to build and scale large language model applications efficiently without needing to manage complex infrastructure. For further customisation and scaling, refer to OGC's comprehensive documentation and support services.
Running the LLaMA 3.1 405B Model in Kubernetes using Ollama and OpenWebUI This tutorial will guide you through deploying the LLaMA 3.1 405B model in a Kubernetes cluster using the Ollama service and OpenWebUI. The LLaMA 3.1 405B model is a large language model with 405 billion parameters, optimised for various tasks using 4-bit quantisation. We will deploy Ollama and OpenWebUI in the llama405b namespace, leveraging H100SXM-80 GPUs to handle the heavy computation requirements of the model.
Prerequisites
A Kubernetes cluster with H100SXM-80 GPU nodes available.
A configured kubectl to manage your Kubernetes cluster.
Permissions to create namespaces, deployments, and services in the cluster.
Ollama docker image: ollama/ollama:latest.
Step 1: Create the llama405b Namespace First, create the llama405b namespace in your Kubernetes cluster to isolate the resources for this model.
Step 2: Create the Entrypoint Script ConfigMap Create a ConfigMap that holds the entrypoint.sh script to control how Ollama pulls and serves the model (https://ollama.com/library/llama3.1:405b). Apply this ConfigMap to your cluster.
Step 3: Deploy Ollama Service The ollama-deployment.yaml file creates a deployment for the Ollama service, pulling the LLaMA 3.1 405B model and serving it on port 80. We must allocate at least 3 H100SXM-80 GPUs to run this very large model. Apply the deployment to the cluster.
Step 4: Expose the Ollama Service Create a LoadBalancer service, ollama-service.yaml, to expose Ollama externally, and apply the service manifest. Note: Ensure the label type: external is added to the service to request an external IP address. The targetPort is set to 11434, which is the default port Ollama uses. Key fragments of both manifests are sketched below.
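Sketches of the pieces this tutorial calls out: the GPU allocation in the deployment and the externally labelled service (other fields follow the previous guide's deployment):

```bash
# Fragment of ollama-deployment.yaml: pin to H100 SXM nodes and request 3 GPUs
#   nodeSelector:
#     gpu.nvidia.com/class: H100SXM-80
#   resources:
#     limits:
#       nvidia.com/gpu: 3

kubectl apply -n llama405b -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: ollama-service
  labels:
    type: external        # requests an external IP address
spec:
  type: LoadBalancer
  selector:
    app: ollama
  ports:
    - port: 80
      targetPort: 11434   # Ollama's default port
EOF
```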
Step 5: Deploy OpenWebUI Service OpenWebUI provides a UI to interact with the deployed model. Use the following deployment manifest, openwebui-deployment.yaml: Apply the deployment manifest: Step 6: Expose OpenWebUI Service Create a LoadBalancer service to expose OpenWebUI externally by deploying the manifest, openwebui-service.yaml: Apply the service manifest: Step 7: Verifying the Deployment
Check the Logs: Monitor the logs of the ollama pod to verify that the model is being pulled successfully:
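For example (the deployment name is an assumption):

```bash
# Follow the Ollama pod logs while the model downloads
kubectl logs -f deployment/ollama -n llama405b
```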
Access the Ollama pod and list the models, as sketched below. This should show the llama3.1:405b model and its size, confirming the model has been downloaded inside the pod.
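```bash
# List models inside the running pod (the pod name will differ)
kubectl exec -it <ollama-pod-name> -n llama405b -- ollama list
```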
Step 8: Accessing the OpenWebUI Service Once the external IP for the openwebui-service is assigned, navigate to the OpenWebUI interface by visiting the provided external IP address: http://<EXTERNAL-IP>. You should see the OpenWebUI dashboard. In the dropdown menu for selecting models, you can interact with the LLaMA 3.1 405B model. Note: Because of the large size of the LLaMA 405B model, the service will be available only after the model has been downloaded inside the pod. The initial loading into GPU memory can take up to 6-7 minutes, resulting in a delayed response in OpenWebUI. Once the model is fully loaded into GPU memory, the response time is minimal. However, if there's a period of inactivity, Ollama offloads the model from GPU memory. This means the response will be delayed again, as the model needs to be reloaded into memory.
Check GPU Usage after sending a request via OpenWebUI: the output will display the GPU usage details, confirming the model is loaded into GPU memory. Alternatively to OpenWebUI, you can also use curl in your terminal to send requests to the model, as sketched below. Here the EXTERNAL-IP is the IP address of the Ollama service load balancer, which listens on port 80. Just like in the case of OpenWebUI, there will be a cold start accounting for loading the model into GPU memory, so please allow some time for the response. By following this tutorial, you should be able to successfully deploy the LLaMA 3.1 405B model using Ollama and OpenWebUI and interact with it through the OpenWebUI interface.
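A sketch of both checks, using Ollama's generate API for the curl request (pod name and prompt are illustrative):

```bash
# Check GPU memory usage inside the pod
kubectl exec -it <ollama-pod-name> -n llama405b -- nvidia-smi

# Send a prompt to the model through the load balancer on port 80
curl http://<EXTERNAL-IP>/api/generate -d '{
  "model": "llama3.1:405b",
  "prompt": "Why is the sky blue?"
}'
```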
Stable Diffusion Stable Diffusion is a popular machine learning model used for generating high-quality images from textual descriptions. Running this type of workload on the OGC Serverless Kubernetes platform allows you to leverage the scalability of serverless for computationally intensive tasks such as AI model inference.
Step 1: Deploying the Container
To deploy Stable Diffusion on Serverless Kubernetes, you need to create a deployment manifest. This manifest will specify the use of your Docker container. The deployment specifies 1 replica of the Stable Diffusion container, and the nodeSelector ensures that the pods are deployed on GPU-enabled nodes.
Configuring with a Load Balancing Service A LoadBalancer service automatically creates an external load balancer and assigns it a fixed, external IP address. Traffic from the external load balancer is directed to backend pods. Below is an example of how to define a LoadBalancer service in a Kubernetes YAML manifest for Stable Diffusion. Running Stable Diffusion in a serverless Kubernetes environment showcases Ori's ability to handle dynamic, resource-intensive tasks efficiently. By leveraging serverless, you can ensure that the infrastructure scales with the demand, optimizing both cost and performance. This setup is particularly advantageous for AI-driven applications, where computational needs can vary significantly.
Step 2: Check Status 1. View your running pods. 2. View your pod details.
Step 3: Access the Stable Diffusion application You can access your SD application in two ways. 1. Local access: you can access the application at http://localhost:8080 (for example, via port-forwarding). 2. Public access: if you have set up an external Load Balancer, you can access the application from the public internet; to get the public IP address, run the command sketched below. You can then access the application at http://<EXTERNAL-IP>:8888
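Hedged sketches of the commands above; the service name and port mapping are assumptions matching the ports in the text:

```bash
# Step 2: status checks
kubectl get pods
kubectl describe pod <stable-diffusion-pod-name>

# Step 3, local access: forward local port 8080 to the application
kubectl port-forward svc/stable-diffusion-service 8080:8888

# Step 3, public access: read EXTERNAL-IP from the service listing
kubectl get svc stable-diffusion-service
```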
Get Started The following guide will walk you through the steps needed to deploy your first application in Serverless Kubernetes.
Step 1: Create a cluster When creating a cluster, you will be able to choose which region you would like to use. Please note that each region will have different nodes available.
Step 2: Access the cluster Once the cluster becomes Available, you will be able to download the kubeConfig.yaml file which will be used to access the cluster.
Follow the Kubernetes guide (https://kubernetes.io/docs/tasks/tools/) to install the kubectl CLI tool. You are then able to access the cluster with the command sketched after the note below. It will display all the namespaces in your cluster, indicating that your CLI is properly configured to communicate with your Kubernetes serverless cluster. :::info Documentation
You can read more about Kubernetes and kubectl in the Kubernetes official documentation (https://kubernetes.io/docs/reference/kubectl/).
:::
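A sketch of the access check, assuming the downloaded kubeConfig.yaml is in your current directory:

```bash
# Point kubectl at the downloaded cluster config
export KUBECONFIG=$PWD/kubeConfig.yaml

# List namespaces to confirm connectivity
kubectl get namespaces
```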
Step 3: Deploy your application In order to deploy an application in your GPU cluster, you should first review Node Selectors (nodeselector.md). You can find some sample applications to deploy in the Examples (examples/stablediffusion.md) section.
Node Selectors Node Selectors allow you to assign Pods to specific nodes within a cluster. This is especially useful in environments where different workloads require different hardware configurations, such as CPU-intensive or GPU-intensive tasks. By using node selectors, you can optimize resource utilization and ensure that your applications run on the most suitable hardware. :::info Documentation
You can learn more about Node Selectors in the Kubernetes documentation (https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/).
:::
Managing Node Selectors To select a particular GPU type, use the selector gpu.nvidia.com/class: <GPU-TYPE> in your resource spec.
Adding Node Selectors To add a node selector to your pod configuration, include the nodeSelector field in your pod's specification. Here's how you can specify node selectors for both GPU and CPU nodes.
Deploying to GPU-Enabled Nodes To deploy a pod that requires a GPU, add a node selector to your pod's specification to ensure it runs on a node equipped with a GPU. In the example sketched below, the pod gpu-pod is configured to run on nodes labeled with gpu.nvidia.com/class: H100SXM-80. The container cuda-container will utilize an NVIDIA CUDA image, specifically requiring a GPU to operate.
Deploying to CPU-Only Nodes If you need to deploy a pod that does not require a GPU, you can simply omit the GPU resources. This configuration ensures that the cpu-pod, which runs a simple NGINX web server, will be placed on any node that has the capacity to run this workload.
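Sketches of the two pods described above; the image tags and the sleep command are illustrative:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  nodeSelector:
    gpu.nvidia.com/class: H100SXM-80   # pin to H100 SXM nodes
  containers:
    - name: cuda-container
      image: nvidia/cuda:12.4.1-base-ubuntu22.04
      command: ["sleep", "infinity"]
      resources:
        limits:
          nvidia.com/gpu: 1            # request one GPU
---
apiVersion: v1
kind: Pod
metadata:
  name: cpu-pod
spec:
  containers:
    - name: nginx
      image: nginx:stable              # no GPU resources requested
EOF
```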
Overview OGC Serverless Kubernetes is the simplest and quickest way to run AI/ML workloads on Kubernetes. It enables developers to focus on the workloads while the nodes are automatically managed. GPU Support: Integrated GPU resources make it ideal for compute-intensive applications, particularly in ML and AI. Serverless Managed Cluster: The Kubernetes environment is fully managed by Ori, removing the overhead of managing clusters and node resources. Familiar Experience: Offers a user experience similar to traditional Kubernetes, providing familiarity for those accustomed to kubectl. Cost-Efficient: Per-minute billing ensures users only pay for the resources they consume, optimizing cost efficiency. Scalability: Dynamically scales based on workload demands, ensuring optimal resource utilization. Currently, we are offering the following GPU types:
NVIDIA H100 SXM: for complex deep learning tasks and high performance inference.
NVIDIA H100 PCIe: ideal for inference/training of large models and best performance.
NVIDIA L4: great for graphics and for machine learning tasks such as training and inference at a great price point.
NVIDIA L40S: designed for high-performance computing, AI workloads, and advanced graphics rendering in DC and Enterprise environments.
Use Cases
Ideal for applications that require rapid scaling.
Suitable for ML/AI workloads that need GPU resources.
Perfect for developers who prefer a Kubernetes environment without the complexity of cluster management.
Resources
Go through the Get Started (getstarted.md) guide
Learn about how to use Node Selectors (nodeselector.md) to allocate specific GPU types.
Deploy a Stable Diffusion (examples/stablediffusion.md) model into your cluster.
Understand how billing (billing.md) is done with Kubernetes.
"label": "Model Registry", "position": 6, "link": "type": "doc", "id": "introduction"
Get Started This section guides you through setting up and using OGC's Model Registry to upload and manage models for deployment.
Step 1: Create a Registry
From the main navigation, go to Model Registry.
Click on the Registry tab in the sub-menu on the top.
Select Create New Registry. :::note
Every new organization starts with a default registry already configured.
If you need a custom setup, you can create a new registry and choose your preferred location; this is where your model weights will be stored and made available for deployment to Endpoints.
:::
Step 2: Create a New Model Once your registry is set up:
Go to the Model sub-menu.
Click Create New Model.
In the form, provide:
A model name
The registry where it should reside
The preferred GPU types (you can select more than one) :::info
Selecting the GPU types during model creation ensures that, at deployment time, the platform can match your model with compatible GPUs available in the chosen registry location.
:::
Step 3: Upload Model Weights via CLI After creating a model, you'll be taken to its details view, where you can view and manage its versions. An example CLI command is sketched after the note below. :::info
Supported file types include: .safetensors, .json, .md, .txt, .py, .gitattributes, .yml, and LICENSE
:::
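A hedged sketch using the documented ogc model upload subcommand; the flag names and argument order are assumptions, so check `ogc model upload --help` for the exact syntax:

```bash
# Upload local weights as a new version of an existing model (flags illustrative)
ogc model upload --model my-model --version v1 ./weights/
```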
Upload Status and Version Availability
Upload progress is visible on both the UI and CLI.
If the upload is interrupted or fails, the status will show as UploadFailed.
Once uploaded successfully:
The version status changes to Uploaded
Then transitions to Synchronizing
Finally becomes Available and ready for deployment to Endpoints. Alternative: Use Fine-Tuned Models You don't always need to create and upload a model manually. When you launch a fine-tuning job (see the Fine-Tuning Guide (/docs/fine-tuning/getstarted.md)), the resulting model is automatically:
Created in the registry using the base model name plus the job suffix.
Populated with checkpoints as model versions once the job is Completed.
Made Available for deployment directly from the model version list.
Overview OGC's Model Registry is a centralized repository for managing, versioning, and deploying your custom AI models. You can bring your custom (trained) AI models and upload the model weights via the CLI, or alternatively, use our built-in Fine-Tuning (/docs/fine-tuning/overview.md) feature to automatically generate new model versions. This guide introduces how the registry works, which file types are supported, how models are uploaded, and how fine-tuned models integrate seamlessly into the model registry. Whether you are uploading an existing model or creating a new one, the Model Registry keeps your models organized, and ready for deployment.
"label": "Object Storage", "position": 8, "link": "type": "doc", "id": "introduction"
Billing Buckets are billed per-minute and based on the capacity used. Usage is updated once a day, meaning the amount charged will not always match the bucket usage. :::info
If Versioning is Enabled, please note that deleting an object will leave snapshots of the data for backup purposes. These will incur a cost if not deleted.
::: You can read more about billing in our Support (../support/overview.md) section.
Get Started
Step 1: Create bucket Create an object storage bucket, defining a name as well as the desired location. Upon creation, store the access key safely as you will need it to interact with the bucket. :::info Naming
Bucket name must be globally unique and DNS-compliant.
:::
Step 2: Configure access key To configure your Access Key ID and Secret Access Key, follow these steps:
Access your terminal (Linux/macOS)
Install the AWS CLI (https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) and run the configure command, replacing the placeholders with the actual values provided on the Ori platform (leaving Region empty).
Step 3: List bucket objects To check that everything has been set up correctly, you can run a list command on the bucket, replacing the placeholders with the actual values provided on the Ori platform.
Step 4: Copy objects to bucket
Step 5: Delete objects You can delete a single object or every object. If versioning is enabled, you may also want to delete the object versions as well as the delete markers. Hedged sketches of Steps 2-5 follow.
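The commands below assume Ori's S3-compatible endpoint URL is substituted for the placeholder; bucket and file names are illustrative:

```bash
# Step 2: store the access key pair (leave Default region name empty)
aws configure

# Step 3: list the bucket's objects
aws s3 ls s3://<bucket-name> --endpoint-url <ori-endpoint-url>

# Step 4: copy a local file into the bucket
aws s3 cp ./example.txt s3://<bucket-name>/ --endpoint-url <ori-endpoint-url>

# Step 5: delete a single object, or everything in the bucket
aws s3 rm s3://<bucket-name>/example.txt --endpoint-url <ori-endpoint-url>
aws s3 rm s3://<bucket-name> --recursive --endpoint-url <ori-endpoint-url>
```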
Overview OGC Object Storage provides a powerful and scalable way to store your objects. S3 Compatibility - Use a familiar S3 interface to interact with your bucket. Versioning - Leverage versioning to ensure your objects are protected from overwrites and deletions. Global Availability - Use OGC's global footprint to store your data where you need it.
Resources
Go through our Get Started (getstarted.md) guide.
Understand how Billing (billing.md) works.
Explore the AWS S3 CLI (https://docs.aws.amazon.com/cli/latest/reference/s3/).
Two Factor Authentication (2FA) Two-factor authentication adds an extra layer of security to your account. When enabled, after logging in with your username and password, you will be prompted to enter a generated access code from an Authenticator App. To enable MFA:
Navigate to Settings and then Users
Find your user, and select Edit
Enable the MFA Flag
To finish setting up with your Authenticator app, you will need to log out and log back in.
Follow the steps presented to configure MFA on your Authenticator app. You will now be prompted for an MFA token every time you log in.
"label": "Support", "position": 10, "link": "type": "doc", "id": "introduction"
Billing and Payments
Introduction The billing system on Ori Global Cloud is designed to provide you with a transparent and manageable approach to handling your cloud expenses. This section outlines how billing works, including payment methods, thresholds, VAT considerations, and the terms and conditions for the services we provide.
Payment Methods and Details
Adding Payment Methods
To utilise OGC services, you must first register a payment method. Navigate to Settings, then under Billing and Usage select Create account.
Payment details are securely stored and provided via Stripe, our payment processing partner.
Upon registration of a payment method, you authorize Ori Industries to store your payment details to facilitate future transactions. :::note
At your first VM launch, Stripe will send a pre-authorisation request to the issuing bank to verify that the bank issued the card and can authorise upcoming charges. A temporary uncaptured hold of 10 (in your billing currency) may appear on your statement, but you are not being charged and no funds are transferred from your card or account unless your first invoice payment fails.
:::
Default Payment Method
The payment method designated as Default will be automatically charged at each billing cycle or upon reaching the payment threshold.
You can manage your payment methods, including setting a default, changing, adding, or deleting details, through the billing portal.
Billing Thresholds
Payment Threshold
Billing is triggered when the cost of billable services reaches a predetermined payment threshold.
If you wish to adjust your payment threshold, requests can be made through Ori's support channels at support@ori.co.
Billing and Invoices
Pay-as-You-Go
For On-demand services, resources are billed per minute to ensure you only pay for what you use.
Payment will be triggered when the billing threshold is reached or otherwise monthly (starting from the first invoice).
Local Taxes and Pricing
All displayed prices on Ori Global Cloud do not include local tax.
Payment Issues and Service Suspension
Service Provisioning
Users are required to have a valid payment method registered before provisioning any services on the platform.
Failed Payments
If a payment fails to process, all services will be at risk of suspension and will be subject to deletion if the payment is not settled in a timely manner.
You will be notified of the payment failures via email.
Service Suspension
In the event of a service suspension due to payment issues, all services will be suspended and may become unavailable.
Once the payment is settled, you will be able to resume your resources.
Agreement See the Billing Agreement (billingagreement.md) page for additional information.
GPU Virtual Machine Service Billing Agreement
Terms and Conditions of Service
Introduction This Billing Agreement ("Agreement") sets forth the terms and conditions under which Ori Global Cloud ("OGC", "we", "us", or "our") offers its GPU Virtual Machine Service ("Service") to you ("Customer", "you", or "your"). By using our Service, you agree to be bound by this Agreement, our Privacy Policy, and our Terms of Service, as they may be amended from time to time.
Service Description OGC's GPU Virtual Machine Service provides customers with access to virtual machines equipped with GPU capabilities for computing tasks such as data processing, machine learning, and other GPU-intensive workloads.
Billing and Payment
3.1 Payment Method
You must provide a valid payment method to use the Service. We accept payments through Stripe for credit/debit card transactions.
By registering a payment method, you authorize OGC to charge your payment method for all charges incurred in connection with your use of the Service.
3.2 Billing Cycle
On-demand Services are billed on a pay-as-you-go basis, calculated per minute of usage.
Billing commences when a Service instance is available and stops when the instance is terminated.
3.3 Payment Threshold
Billing is triggered when your accumulated charges reach the predefined payment threshold or at the end of the billing cycle, whichever comes first.
You may request an adjustment to your billing threshold by contacting support@ori.co.
3.4 Taxes
Prices for the Service do not include applicable taxes. You are responsible for any taxes that arise from your use of the Service in your jurisdiction.
For UK customers, VAT is automatically calculated and included in the invoice and payment collection.
3.5 Failed Payments
In case of a failed payment, services will be suspended shortly after and may be removed if the payment is not resolved. Suspended Services can be resumed once payment issues are resolved.
Changes to Billing Agreement
OGC reserves the right to modify this Billing Agreement at any time. Changes will become effective immediately upon posting to our website or when we notify you otherwise.
Cancellation and Termination
You may cancel your Service at any time. Billing will cease upon termination of the Service.
OGC reserves the right to terminate this Agreement and suspend the Service for any breach of these terms.
Refunds and Disputes
Charges for the Service are non-refundable, except as required by law or as explicitly set forth in this Agreement.
Any billing disputes must be reported to OGC within 60 days of the transaction in question.
Limitation of Liability and Data Loss
7.1 Responsibility for Data
The Customer is solely responsible for the security, protection, backup, and replication of their data stored on the Service. Ori Global Cloud recommends that Customers regularly back up their data as part of a comprehensive data management strategy.
7.2 Loss of Data
Ori Global Cloud shall not be held liable for any loss of data under any circumstances, including but not limited to, data loss resulting from operational errors by the Customer, service suspensions, or terminations due to billing issues, or during the enforcement of the terms of this Agreement.
7.3 Service Termination
In the event of termination of the Customer's VM(s) or Service, whether initiated by the Customer or by OGC due to non-compliance with this Agreement, OGC will not be responsible for any loss of data that may occur as a result of such termination.
7.4 Data Recovery
OGC does not guarantee that data can be recovered once a Service instance has been terminated or suspended. It is the Customer's responsibility to ensure that their data is securely backed up.
Contact Information For billing support or to dispute a charge, please contact us at support@ori.co.
Governing Law This Agreement shall be governed by and construed in accordance with the laws of the United Kingdom, where OGC is registered, without regard to its conflict of law provisions.
Acknowledgment By using the Service, you acknowledge that you have read, understood, and agree to be bound by the terms of this Billing Agreement.
Data Center Certifications Ori Global Cloud is committed to maintaining the highest standards of information security, privacy, and compliance. Ori Industries 1 Ltd ("Ori") is subject to an annual ISO 27001 audit, demonstrating our commitment to the highest standards of information security management. In addition, we are currently undergoing a SOC 2 audit, further reinforcing our dedication to data security. This audit ensures that our systems and processes meet rigorous industry standards, providing our customers and partners with the confidence that their data is protected. All of our hosting data centers adhere to stringent compliance requirements, with each facility being either SOC 2 or ISO 27001 certified, and the vast majority meeting both standards. This robust framework of compliance across our operations and infrastructure provides our customers with the assurance that their data is handled securely and in accordance with internationally recognized standards. Compliance information for Public Cloud hosting DCs is provided in a table with the columns: Name, Country Code, Continent Code, Certifications.
Frequently Asked Questions This page provides answers to common questions and issues.
What is Ori Global Cloud (OGC)? OGC is an end-to-end AI/ML cloud platform specializing in GPU resources and services tailored for machine learning (ML) and artificial intelligence (AI) applications. It aims to empower developers, data scientists, and organizations by providing advanced cloud solutions to revolutionize various sectors.
How do I get started with using Ori Global Cloud? To start using OGC, all you need to do is sign up (https://www.ori.co/signup)!
How do I report a bug or issue? If you encounter a bug or need help troubleshooting an issue, you can raise a support case through our dedicated support portal (https://oriindustries.atlassian.net/servicedesk/customer/portals).
How do I request an increase in my service quotas? If you find that your current service quotas do not meet your project's needs, you can request an increase through the platform directly, or by raising a ticket through our dedicated support channel (https://oriindustries.atlassian.net/servicedesk/customer/portals).
How is billing handled on Ori Global Cloud? Billing for on-demand Ori Global Cloud services is based on the resources used. Detailed information about billing and payment methods can be found in our billing page (billing/billing.md).
Is there a free trial available for Ori Global Cloud? If you are looking to try OGC, we can offer free credits to help get you started! Please get in touch with us at support@ori.co and tell us more about your use case.
Where can I find compliance information about Ori? All of Ori's data centers are ISO 27001 and/or SOC 2 compliant. See detailed information on our compliance page (datacentercert.md).
Support Welcome to the OGC Support documentation! Our goal is to provide you with the necessary resources and information to help you troubleshoot and resolve any issues you may encounter while using OGC. This documentation is designed to be a comprehensive guide to the troubleshooting and support process for OGC, including detailed instructions and best practices for resolving common issues. The documentation is divided into several sections, each covering a specific aspect of the OGC platform. From common troubleshooting steps and frequently asked questions, to how to raise a support case and how to check the status of the OGC Platform, this documentation will provide you with the information you need to keep your applications running smoothly. In addition to this documentation, the OGC support team is available to help you with any questions or issues you may have. Whether you need help resolving a specific issue or have a general question about the platform, our team is here to help. With this documentation and the support of our team, you will have all the resources you need to keep your OGC deployment running smoothly and effectively. So, let's get started!
Support Resources
For Support and Community discussions (https://edgehogs.slack.com/)
Checking the status of Ori Global Cloud (https://status.ogc.ori.co/)
Raising a case with Support (https://oriindustries.atlassian.net/servicedesk/customer/portals)
Frequently Asked Questions (faq.md)
Roles OGC Roles allow Role-Based Access Control (RBAC) to be applied to valid resources. Roles:
Owner - Full access to all resources and billing.
Editor - View and manage all resources, except for billing.
Viewer - Ability to view resources and billing, but not to manage or act on them.
These roles can be set for Users and API Tokens, restricting the operations allowed when taking an action under each role.
Support If you encounter any issues or need assistance with our product, our support team is here to help.
How to Submit a Support Request Support requests can be submitted through the OGC Dashboard via the Support Desk here (https://oriindustries.atlassian.net/servicedesk/customer/portal/3/group/11). When submitting a ticket, please provide:
A detailed description of the issue.
Affected resource(s) (e.g., VM ID, Cluster Name, Supercomputer node, Endpoint Name)
Steps to reproduce the issue (if applicable)
Logs or screenshots (optional but helpful)
Support Response Times Ori Global Cloud (OGC) is committed to providing fast and reliable support to ensure smooth operation of your compute resources.
Supported Resources OGC users may request support for issues related to the following resources provisioned on our platform:
VMs (Virtual Machines)
Supercomputers (Bare Metal Machines)
Kubernetes Clusters
Endpoints (Popular AI models for real-time inferencing)
Response Times and Business Hours OGC Support is available 7 days a week, including weekends and public holidays. While our teams operate across time zones to ensure consistent coverage, please note that Severity 3 and Severity 4 issues may experience minor delays during weekends or holidays, as priority is given to higher-severity incidents.
Terms of Use Ori Industries 1 Ltd
This Ori Global Cloud User Agreement, including all documents and terms incorporated by reference herein (collectively, the "Agreement"), is entered into by and between Ori Industries 1 Ltd, a United Kingdom company with its principal place of business at Tintagel House, 92 Albert Embankment, London, SE1 7TY, UK ("Company") and the organisation you identified when you registered to use the Service ("Customer"). This Agreement is effective on the date you registered to use the Service (the "Effective Date"). By registering to use the Service, you agree to the terms and conditions of this Agreement as an individual or on behalf of your organisation. You represent and warrant that you have the legal authority to bind yourself and/or your organisation to this Agreement, and that you have read and understood this Agreement. If you do not have such authority, or if you or your organisation does not agree with the terms of this Agreement, you should not accept it. Company may make access to the Service(s) and certain features subject to certain requirements or conditions, including but not limited to requesting customer information and meeting specific eligibility requirements.
Definitions
" Party " means any person or entity that controls, is controlled by, or is under common control with another person or entity, where "control" means ownership of fifty percent (50 ) or more of the outstanding voting securities. Authorised User means a named individual that (a) is an employee, representative, consultant, contractor or agent of Customer or a Customer Affiliate; (b) is authorised to use the Service pursuant to this Agreement; and (c) has been supplied a user identification and password by Customer. Commitment Period refers to the duration beginning from the Effective date of Service(s), and ending at the conclusion of the agreed upon term, where Customer has committed to a specified contractual term for a duration specified in the attached Order Form of this agreement. Customer Data means any electronic data or materials provided or submitted by Customer or Authorised Users to or through the Service. Documentation means the online help materials, including technical specifications, describing the features and functionality of the Service, which are located on the Company s publicly-available website at ori.co, as updated by the Company from time to time. Intellectual Property Rights means all current and future worldwide intellectual property rights, including without limitation, all patents, copyrights, trademarks, service marks, trade names, domain name rights, know-how and other trade secret rights, and all other intellectual property rights and similar forms of protection, and all applications and registrations for any of the foregoing. Service means the applicable version of the Company s Ori Global Cloud hosted software application, including any associated GPU compute and managed services. Subscription Term(s) means, unless a different period is specified on the Company web page where Customer registers for the Service, either an ongoing subscription period(s) of one (1) year during which Authorised Users may use the Service, subject to the terms of this Agreement, or a specified contractual term ( Commitment Period ) as defined by the date and other particulars outlined in the relevant contractual agreement. Support Services means the maintenance and support services provided by the Company to Customer during the Subscription Term, as more fully described in Section 2.3 below. LICENCE AND SUPPORT SERVICES --------------------------------
Licence and Access Rights to the Service. The Company will host the Service and will make the Service available to Customer during the Subscription Term(s), subject to the terms and conditions of this Agreement. The Service is offered to the Customer at no cost, unless the Customer has selected a paid version of the Software or agreed to a Commitment Period. Customer's access and usage of the Service may not exceed the defined limits of the Service, and may not interfere with other users' utilisation of the Service. The Company may update the content, features, functionality, and user interface of the Service from time to time in its sole discretion, and may discontinue or suspend all or any portion of the Service at any time in its sole discretion, including during a Subscription Term; provided, that the Company will give Customer at least fifteen (15) days advance notice before discontinuing the Service or materially decreasing the functionality of the Service(s) during the Subscription Term. The Company grants the Customer a limited, non-exclusive, non-sublicensable, non-transferable (except as specifically permitted in this Agreement) right to access and use the Service and its Documentation during the Subscription Term, solely for Customer's internal business purposes. Customer may permit its Affiliates to use and access the Service and Documentation in accordance with this Agreement, but Customer will be responsible for the compliance of all Affiliates with this Agreement. For the avoidance of doubt, the Ori Global Cloud hosted software application Service is available only on a hosted basis, and the Customer will not independently possess, run, or install the Service.
Restrictions. Except as otherwise expressly set forth in this Agreement, Customer will not and will not permit any third party to: (a) sublicense, sell, transfer, assign, distribute or otherwise grant or enable access to the Service in a manner that allows anyone to access or use the Service without an Authorised User subscription, or to commercially exploit the Service; (b) copy, modify or create derivative works based on the Service; (c) reverse engineer or decompile the Service (except to the extent permitted by applicable law and only if the Company fails to provide permitted interface information within a reasonable period of time after Customer's written request); (d) copy any features, functions or graphics of the Service; (e) allow Authorised User subscriptions to be shared or used by more than one individual Authorised User (except that Authorised User subscriptions may be reassigned to new Authorised Users replacing individuals who no longer use the Service for any purpose, whether by termination of employment or other change in job status or function); or (f) access or use the Service: (i) to send, store, or serve as the infrastructure to facilitate infringing, obscene, threatening, or otherwise unlawful, unethical and/or potentially harmful material, including without limitation incitements to violence, defamatory material, public disinformation campaigns, and/or material violative of third-party privacy rights; (ii) in violation of applicable laws; (iii) to send or store material containing software viruses, worms, Trojan horses or other harmful computer code, files, scripts, or agents; (iv) in a manner that interferes with or disrupts the integrity or performance of the Service (or the data contained therein); (v) to gain unauthorised access to the Service (including unauthorised features and functionality) or its related systems or networks; (vi) to circumvent defined limits on an account in an unauthorised manner; (vii) to abuse referrals, promotions or credits to get more features than paid for; or (viii) to access, search, or create accounts for the Service by any means other than the Company's publicly supported interfaces (for example, scraping or creating accounts in bulk).
Support Services. During the Subscription Term, the Company will provide email support for the Service, which Customer may request by emailing the Company at support@ori.co. Customer agrees to request support only for the Service licensed under this Agreement. Usage Limits. Use of the Service is subject to any usage limits, which may include limitations on features and functionality, that are set forth on the Company webpage where the Customer registered for the Service. If Customer exceeds any such limits, Customer will promptly notify the Company and work with the Company to promptly change its usage to comply with the limits. The Company may periodically verify that Customer's use of the Service is within the applicable usage limits, and Customer will promptly and accurately certify and/or provide evidence of Customer's compliance with the applicable usage limits as may be requested by the Company from time to time.
CUSTOMER RESPONSIBILITIES FOR CUSTOMER DATA AND AUTHORISED USERS
Customer agrees to promptly notify the Company of any unauthorised access to Authorised User accounts of which Customer becomes aware. Customer has exclusive control and responsibility for determining what data Customer submits to the Service, for obtaining all necessary consents and permissions for submission of Customer Data and processing instructions to the Company, and for the accuracy, quality and legality of Customer Data. Customer is further responsible for the acts and omissions of Authorised Users in connection with this Agreement, for all use of the Service by Authorised Users, and for any breach of this Agreement by Authorised Users. The Customer will use reasonable measures to prevent, and will promptly notify the Company of, any known or suspected unauthorised use of Authorised User access credentials.
INTELLECTUAL PROPERTY RIGHTS AND OWNERSHIP
Ownership. The Service and Documentation, all copies and portions thereof, and all Intellectual Property Rights therein, including, but not limited to, derivative works therefrom, are and will remain the sole and exclusive property of the Company notwithstanding any other provision in this Agreement. Customer is not authorised to use (and will not permit any third party to use) the Service, Documentation or any portion thereof except as expressly authorised by this Agreement. Licence to Customer Data. Customer grants the Company a worldwide, non-exclusive licence to host, copy, process, transmit and display Customer Data as reasonably necessary for the Company to provide the Service in accordance with this Agreement. Subject to this limited licence, as between Customer and the Company, Customer owns all right, title and interest, including all related Intellectual Property Rights, in and to the Customer Data. Customer Use. Customer acknowledges and consents that: (1) Ori may utilise customer feedback and the knowledge acquired from customer usage of the service, which customer agrees to provide freely; (2) Ori maintains exclusive ownership of all intellectual property rights pertaining to the service, along with any enhancements, alterations, and/or derivative works resulting from customer use of the platform; (3) customer grants Ori the authority to showcase the customer's company name and logo in conjunction with use of the platform. Use of Aggregate Information.
The Company may collect and aggregate data derived from the operation of the Service ("Aggregated Data"), and the Company may use such Aggregated Data for purposes of operating the Company's business, monitoring performance of the Service, and/or improving the Service; provided, that the Company's use of Aggregated Data does not reveal any Customer Data, Customer Confidential Information, or personally identifiable information of Authorised Users.
PAYMENTS; COMMITMENTS
Customer agrees to pay Company all charges at the prices then in effect for the products Customer and Authorised User may purchase. Customer further authorises Company to charge the chosen payment method for any recurring purchases or payments due. Sales tax will be added to the sales price of purchases as deemed required by Company. Company may temporarily suspend Customer's and Authorised Users' right to access or use any portion or all of the Service immediately upon notice to Customer if it is determined Customer is in breach of payment obligations under the Subscription Term and Commitment Period. If Customer is invoiced an amount and the payment is not received by Company by the due date, then, without limiting Company's rights or remedies, any amounts owed may accrue late interest at a rate of 1.5% of the outstanding balance per month or the maximum rate permitted by law, whichever is lower. Such conditions allow Company to change payment terms to payment in advance or shorter payment terms in future. If Customer falls overdue on any charged fees, Customer authorises Company to take fees due by any payment methods on record. If Company is unable to take payment, Company may, without limiting its other rights and remedies, suspend or terminate Customer's access to the Service or account, and may withhold funds in those associated accounts until such amounts due are paid in full. In the event that Customer fails to pay any amounts owed under this Agreement within the payment terms, Company shall have the right to refer the outstanding debt to a third-party collection agency or take any legal measure required to collect its due payments. Customer shall be responsible for paying any and all fees and costs incurred by Company in connection with the collection of the overdue amounts, including without limitation, collection agency fees, court costs, attorney's fees, and any associated administrative fee imposed by Company. The administrative fee shall cover Company's expenses related to the collection process, including but not limited to administrative costs and overheads.
TERM; TERMINATION
Effective Date and Term. This Agreement commences on the Effective Date. Unless earlier terminated pursuant to the terms of this Section 5, the Agreement will continue through the Subscription Term. Unless one party notifies the other more than fifteen (15) days before the end of a Subscription Term, each Subscription Term will automatically renew for an additional Subscription Term of the same length. Termination for Cause.
Either Party may terminate this Agreement immediately upon written notice to the other Party: (a) if the other Party breaches or fails to perform or observe any material term or condition of this Agreement and such default has not been cured within thirty (30) days after written notice of such default to the other Party, notwithstanding any pre-communicated delays from the Company related to the delivery of service to the Customer; or (b) if the other Party (i) terminates or suspends its business, (ii) becomes subject to any insolvency proceeding under federal or state statute, (iii) becomes insolvent or subject to direct control by a trustee, receiver or similar authority, or (iv) has wound up or liquidated, voluntarily or otherwise. For the avoidance of doubt, termination of this Agreement will result in the termination of all Subscription Terms. Termination for Convenience; Suspension. Either Party may terminate this Agreement for any reason or no reason by providing the other party at least fifteen (15) days prior written notice for ongoing subscription period(s). In cases where Customer has a Commitment Period, termination of the Service(s) by the Customer is only permissible once the agreed term has lapsed or the value of the Commitment Period has been paid in full according to the term agreed in the attached Order Form of this Agreement. In addition, the Company may discontinue or suspend Customer's access to the Service immediately if Customer has (or the Company reasonably suspects that Customer has) breached Section 2.2 or infringed the Company's Intellectual Property Rights. Effect of Termination. Upon expiration or termination of this Agreement for any reason: (a) the Company's obligation to provide Support Services and the Service will terminate, (b) all of Customer's and its Authorised Users' rights to use the Service will terminate, and (c) the provisions of Sections 4.3, 6.4, 7, 8, 9, and 10 of this Agreement will survive such expiration or termination. Treatment of Customer Data Following Expiration or Termination. Customer agrees that following termination of this Agreement, the Company may immediately deactivate Customer's account(s) for the Service, and the Company has the right to delete those accounts, including all Customer Data, from the Company's site unless legally prohibited. Customer acknowledges and agrees that it is responsible to retrieve Customer Data from the Service prior to expiration of this Agreement.
REPRESENTATIONS AND WARRANTIES
By Each Party. Each Party represents and warrants that it has the power and authority to enter into this Agreement and that its respective provision and use of the Service is in compliance with laws applicable to such Party. Conformity with Documentation. The Company warrants that, during the Subscription Term, the Service will perform materially in accordance with the applicable Documentation. In the event of a material breach of the foregoing warranty, Customer's exclusive remedy and the Company's entire liability will be for Customer to request the Company's assistance through the Support Services, which the Company will provide in accordance with its obligations under Section 2.3 ("Support Services"). Malicious Code. The Company warrants that, to the best of its knowledge, the Service is free from, and the Company will not knowingly introduce, software viruses, worms, Trojan horses or other code, files, scripts, or agents intended to do harm. Warranty Disclaimers.
Except for the exclusive warranties set forth in this Section 6, to the maximum extent permitted under applicable law, the Service is provided "as is" without warranty of any kind, and the Company makes no warranties, express, implied, statutory, or otherwise, regarding or relating to the Service, Documentation or Support Services. The Company specifically and explicitly disclaims all other warranties, express and implied, including without limitation the implied warranties of merchantability, fitness for a particular purpose, non-infringement, those arising from a course of dealing or usage or trade, and all such warranties are hereby excluded to the fullest extent permitted by law. Further, the Company does not warrant that the Service will be error-free or that the use of the Service will be uninterrupted.
INDEMNIFICATION
By the Company. Subject to the remainder of this Section 7 and the liability limitations set forth in Section 8, the Company will: (a) defend Customer against any third party claim that the Service infringes any trademark or copyright of such third party, enforceable in the jurisdiction of Customer's use of the Service, or misappropriation of a trade secret (but only to the extent that such misappropriation is not a result of Customer's actions) ("Infringement Claim"); and (b) indemnify Customer against and pay any settlement of such Infringement Claim consented to by the Company or any damages finally awarded against Customer to such third party by a court of competent jurisdiction. The Company will have no obligation and assumes no liability under this Section 7 or otherwise with respect to any claim to the extent based on: (a) any modification of the Service that is not performed by or on behalf of the Company, or was performed in compliance with Customer's specifications; (b) the combination, operation or use of the Service with any Customer Data or any Customer or third party products, services, hardware, data, content, or business processes not provided by the Company where there would be no Infringement Claim but for such combination; (c) use of the Service other than in accordance with the terms and conditions of this Agreement and the Documentation; or (d) Customer's or any Authorised User's use of the Service other than as permitted under this Agreement. This Section 7 states Customer's sole and exclusive remedy and the Company's entire liability for any infringement claims or actions. Remedies. Should the Service become, or in the Company's opinion be likely to become, the subject of an Infringement Claim, the Company may, at its option (i) procure for Customer the right to use the Service in accordance with this Agreement; (ii) replace or modify the Service to make it non-infringing; or (iii) terminate Customer's right to use the Service and discontinue the related Support Services. By Customer. Customer will defend, indemnify and hold harmless the Company and its Affiliates, and their directors, officers, employees, agents and licensors, from and against any damages and costs (including reasonable attorneys' fees and costs incurred by the indemnified parties) finally awarded against them in connection with any claim arising from (i) Customer's use of the Service or (ii) Customer Data; provided, that Customer will have no obligation under this Section 7.3 to the extent the applicable claim arises from the Company's breach of this Agreement. Indemnity Process.
Each Party's indemnification obligations are conditioned on the indemnified party: (a) promptly giving written notice of the claim to the indemnifying Party; (b) giving the indemnifying Party sole control of the defence and settlement of the claim; and (c) providing to the indemnifying Party all available information and assistance in connection with the claim, at the indemnifying Party's request and expense. The indemnified Party may participate in the defence of the claim, at the indemnified Party's sole expense (not subject to reimbursement). Neither Party may admit liability for or consent to any judgement or concede or settle or compromise any claim unless such admission or concession or settlement or compromise includes a full and unconditional release of the other Party from all liabilities in respect of such claim.
LIMITATION OF LIABILITY
Damages Exclusion; Liability Cap. In no event will either Party or its Affiliates or Licensors be liable under this Agreement for any consequential, incidental, special, indirect, punitive or exemplary damages, including without limitation lost profits, loss of use, business interruptions, loss of data, revenue, goodwill, production, anticipated savings, or costs of procurement of substitute goods or services, whether alleged as a breach of contract or tortious conduct, including negligence, even if a Party has been advised of the possibility of such damages. Except with respect to liability arising from its obligations under Section 7 ("Indemnification") (for which the liability limitation is one hundred thousand dollars ($100,000) in the aggregate), in no event will the Company's total aggregate liability arising under this Agreement exceed ten thousand dollars ($10,000). Nothing in this Section 8.1 will be deemed to limit either Party's liability for willful misconduct, gross negligence, fraud, or infringement by one Party of the other's intellectual property rights. Limitations Fair and Reasonable. Each Party acknowledges that the limitations of liability set forth in this Section 8 reflect the allocation of risk between the Parties under this Agreement, and that in the absence of such limitations of liability, the economic terms of this Agreement would be significantly different.
CONFIDENTIAL INFORMATION
Confidentiality. "Confidential Information" means this Agreement, the Service, the Company's pricing information, the Company's technical information, Customer Data and any other information disclosed by one party ("Discloser") to the other ("Recipient") in connection with this Agreement that is designated as confidential or that reasonably should be understood to be confidential given the nature of the information and the circumstances of disclosure. Recipient may use Discloser's Confidential Information solely to perform Recipient's obligations or exercise its rights hereunder. Recipient will not disclose, or permit to be disclosed, Discloser's Confidential Information to any third party without Discloser's prior written consent, except that Recipient may disclose Discloser's Confidential Information solely to Recipient's employees and/or subcontractors who have a need to know and who are bound in writing to keep such information confidential pursuant to confidentiality agreements consistent with this Agreement.
Recipient agrees to exercise due care in protecting Discloser's Confidential Information from unauthorised use and disclosure, and in any case will not use less than the degree of care a reasonable person would use. The foregoing will not apply to any information that: (a) was in the public domain at the time it was communicated to the Recipient by the Discloser; (b) entered the public domain subsequent to the time it was communicated to the Recipient by the Discloser through no fault of the Recipient; (c) was in the Recipient's possession free of any obligation of confidence at the time it was communicated to the Recipient by the Discloser; (d) was rightfully communicated to the Recipient free of any obligation of confidence subsequent to the time it was communicated to the Recipient by the Discloser; (e) was developed by employees or agents of the Recipient independently of and without reference to any information communicated to the Recipient by the Discloser; or (f) is expressly permitted to be disclosed pursuant to the terms of this Agreement. Compelled Disclosure. The Recipient will not be in violation of Section 9.1 regarding a disclosure that was in response to a valid order by a court or other governmental body, provided that the Recipient provides the Discloser with prior written notice of such disclosure in order to permit the Discloser to seek confidential treatment of such information. Feedback. To the extent Customer provides any suggestions, recommendations or other feedback specifically relating to the Service or Support Services (collectively, "Feedback"), Customer grants to the Company a royalty-free, fully paid, sub-licensable, transferable (notwithstanding Section 10.1 ("Assignment")), non-exclusive, irrevocable, perpetual, worldwide right and licence to make, use, sell, offer for sale, import and otherwise exploit Feedback (including by incorporation of such Feedback into the Service without restrictions). Sensitive Data. Customer agrees that it will not submit the following types of information to the Service except with the Company's prior written approval: government-issued identification numbers, consumer financial account information, credit and payment card information, personal health information, or information deemed sensitive under applicable law (such as racial or ethnic origin, political opinions, or religious or philosophical beliefs) or personal data (as described in Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data) of data subjects that reside in the European Economic Area (EEA). If Customer wishes to submit any such European personal data to the Service, Customer will notify the Company and the parties may enter into a separate data processing agreement (including the European Commission's Standard Contractual Clauses for the transfer of personal data to processors established in third countries which do not ensure an adequate level of data protection) with the Company prior to submission of such personal data to the Service. Customer represents and warrants that it has obtained all necessary consents and permissions from data subjects for the submission and processing of personal data in the Service.
GENERAL
Assignment.
Neither Party may assign this Agreement, in whole or in part, without the prior written consent of the other Party, provided that no such consent will be required to assign this Agreement in its entirety to (i) an Affiliate that is able to satisfy the obligations of the assignor under this Agreement or (ii) a successor in interest in connection with a merger, acquisition or sale of all or substantially all of the assigning Party's assets, provided that the assignee has agreed to be bound by all of the terms of this Agreement and all fees owed to the other Party are paid in full. If Customer is acquired by, sells substantially all of its assets to, or undergoes a change of control in favour of, a direct competitor of the Company, then the Company may terminate this Agreement immediately upon written notice. Anti-Corruption. Each Party acknowledges that it is aware of, understands and has complied and will comply with, all applicable U.S. and foreign anti-corruption laws, including without limitation, the U.S. Foreign Corrupt Practices Act ("FCPA") and the U.K. Bribery Act. Notices. Notices to a Party will be sent by first-class mail, overnight courier or prepaid post to the address for such Party as identified on the first page of this Agreement and will be deemed given seventy-two (72) hours after mailing or upon confirmed delivery or receipt, whichever is sooner. The Customer will address notices to the Company's Legal Department, with a copy to legal@ori.co. Either Party may from time to time change its address for notices under this Section by giving the other Party at least thirty (30) days prior written notice of the change in accordance with this Section 10.3. Consent to Electronic Communications. By using the Website and/or Services, you consent to receiving certain electronic communications from us as further described in our Privacy Policy. Please read our Privacy Policy to learn more about our electronic communications practices. You agree that any notices, agreements, disclosures, or other communications that we send to you electronically will satisfy any legal communication requirements, including that those communications be in writing. Non-waiver. Any failure of either Party to insist upon or enforce performance by the other Party of any of the provisions of this Agreement or to exercise any rights or remedies under this Agreement will not be interpreted or construed as a waiver or relinquishment of such Party's right to assert or rely upon such provision, right or remedy in that or any other instance. Governing Law. The Agreement and any dispute or claim arising out of or in connection with it or its subject matter or formation (including non-contractual disputes or claims) shall be governed by and construed in accordance with the law of England and Wales. Each party irrevocably agrees that the courts of England and Wales shall have exclusive jurisdiction to settle any dispute or claim arising out of or in connection with this Agreement or its subject matter or formation (including non-contractual disputes or claims). Severability. If any provision of this Agreement is held invalid or unenforceable under applicable law by a court of competent jurisdiction, it will be replaced with the valid provision that most closely reflects the intent of the Parties and the remaining provisions of the Agreement will remain in full force and effect. Relationship of the Parties.
Nothing in this Agreement is to be construed as creating an agency, partnership, or joint venture relationship between the Parties hereto. Neither Party has any right or authority to assume or create any obligations or to make any representations or warranties on behalf of any other Party, whether expressed or implied, or to bind the other Party in any respect whatsoever. Each Party may identify the other as a customer or supplier, as applicable. Entire Agreement; Execution. This Agreement comprises the entire agreement between Customer and the Company, and supersedes all prior or contemporaneous proposals, quotes, negotiations, discussions, or agreements, whether written or oral, between the Parties regarding its subject matter. In the event of a conflict between the terms of this Agreement and any other document referenced in this Agreement, this Agreement will control. Any pre-printed terms on any Customer ordering documents or terms referenced or linked therein will have no effect on the terms of this Agreement and are hereby rejected, including where such Customer ordering document is signed by the Company. This Agreement may be executed in counterparts, which taken together form one binding legal instrument. The Parties hereby consent to the use of electronic signatures in connection with the execution of this Agreement, and further agree that electronic signatures to this Agreement will be legally binding with the same force and effect as manually executed signatures.
PyTorch on VMs PyTorch's seamless integration with GPUs enables faster computation and training times. This guide will help you set up PyTorch on a GPU-enabled Virtual Machine (VM), ensuring you can leverage the full power of GPU acceleration.
Prerequisites Before installing PyTorch on your GPU VM, ensure that the following prerequisite is met:
NVIDIA Driver - Ensure that the correct NVIDIA driver for your GPU is installed on the VM. The driver version should be compatible with the CUDA version you intend to use (see NVIDIA's compatibility reference). To check the NVIDIA driver version, run `nvidia-smi`; it displays the installed driver version and the available GPUs.
Installing PyTorch Once the prerequisite is in place, you can install PyTorch using pip and then verify that it is correctly installed and can utilise the GPU. If CUDA is available and PyTorch is using the GPU, you should see True printed in the console.
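A minimal sketch of these steps; the default PyPI wheel is assumed (pytorch.org lists CUDA-specific install commands):

```bash
# Check the installed NVIDIA driver and visible GPUs
nvidia-smi

# Install PyTorch with pip
pip install torch

# Verify that PyTorch can use the GPU; prints True when CUDA is available
python -c "import torch; print(torch.cuda.is_available())"
```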
:::tip Tip
On SXM virtual machines, if you see False printed in the console, you need to disable NVLink. Verify that NVLink is disabled (e.g. with `nvidia-smi nvlink --status`); the status needs to be N/A, NOT In Progress.
:::
With PyTorch installed on a GPU-enabled VM, you're now ready to develop and train machine learning models with accelerated performance. Whether you're working on deep learning projects or exploring AI research, PyTorch provides the flexibility and power to bring your ideas to life.
"label": "ML Tools", "position": 11, "link": "type": "doc", "id": "Introduction"
"label": "Virtual Machines", "position": 2, "link": "type": "doc", "id": "introduction"
Actions The available actions are Suspend and Restart.
Suspend
Suspending a VM takes a snapshot of the disk while freeing the compute resource. This means that you are able to pause your work at any moment and resume it at a later point. :::info Billing
A suspended machine is not billed for its compute. However, the archived disk will be charged based on usage.
::: Once an instance is Suspended, you will then be able to Resume it. This will create a new VM and load the existing disk into the machine. :::info
Suspending and Resuming machines can take up to 1 hour; the exact time will vary based on the GPU type as well as the size of the disk.
:::
Restart Restarting a VM will perform a hardware-level reboot.
Billing Virtual Machines are billed per-minute and based on the GPU resource being used. Pricing per machine type is available in our pricing page (https://www.ori.co/pricing). :::info The billable states are: Available, Paused, Rebooting, Deleting. Suspended machines are billed for the storage used.
::: :::note
Note that when you first attempt to launch a Virtual Machine, we will place a 10 deposit charge on your card. This charge will be automatically released after 7 days.
::: You can read more about billing in our Support (../support/overview.md) section.
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
Get Started In the following guide, you will go through the necessary steps needed to deploy your first application in a Virtual Machine.
Step 1: Create a VM To create a VM, there are a series of configuration options you can choose from: GPU Type, Location and Init Script. To create a VM via the OGC portal, navigate to the Virtual Machine (https://console.ogc.ori.co) page. You can also create a VM via the CLI tool; a sample command is sketched below, and you can find a more in-depth example in our CLI Examples (../cli/examples.md) page.
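A purely hypothetical sketch (the create subcommand and every flag shown here are assumptions; the CLI Examples page has the authoritative syntax):

```bash
# Hypothetical flags for illustration only; see the CLI Examples page for real syntax
ogc vm create --name my-first-vm --gpu-type h100 --location eu-west
```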
Step 2: Set up the machine Once the machine becomes Available and an IP Address is assigned, you will be able to access it via the SSH key (sshkeys.md) you configured for the machine. If needed, you can install the necessary NVIDIA drivers that will enable you to leverage the underlying GPUs: Ubuntu Guide (nvidia.md) Debian Guide (nvidia-debian.md)
Mount an NVMe drive in Ubuntu (H100 VMs)
Step 1: Identify the NVMe Drive
Open a terminal and list the block devices, as sketched below. Your NVMe drives will appear as /dev/nvme0n1.
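For example:

```bash
# List block devices; NVMe drives show up as nvme0n1, nvme1n1, ...
lsblk
```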
Step 2: Open fdisk to Partition the Drive
Use fdisk to partition the drive: Note: Make sure to replace /dev/nvme0n1 with the correct drive if necessary.
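For example, assuming the drive is /dev/nvme0n1:

```bash
# Open the drive in fdisk (interactive)
sudo fdisk /dev/nvme0n1
```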
Step 3: Partition the Drive with fdisk
Once inside fdisk, use these steps to create a new partition:
Type n to create a new partition.
Choose a partition number (usually 1 if it's a new drive).
Specify the first and last sector (you can press Enter to use the defaults, which will use the entire drive up to a maximum of 2TB, because fdisk defaults to the MBR partition table. If you need more space, see the Converting from MBR to GPT section below for instructions).
Write the changes by typing w and pressing Enter.
This process creates a single partition on the drive, which you'll see as /dev/nvme0n1p1.
Step 4: Format the Partition
Format the new partition with a filesystem (ext4 in this case), as sketched below.
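Assuming the partition created above is /dev/nvme0n1p1:

```bash
# Create an ext4 filesystem on the new partition
sudo mkfs.ext4 /dev/nvme0n1p1
```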
Step 5: Create a Mount Point
Choose a directory where you'd like to mount the NVMe drive, or create one if it doesn't exist. For example:
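A sketch, using /mnt/nvme as an example mount point:

```bash
# Create the mount point directory
sudo mkdir -p /mnt/nvme
```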
Step 6: Mount the Drive
Mount the partition (e.g., /dev/nvme0n1p1) to your chosen mount point:
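A sketch, assuming the names used above:

```bash
# Mount the partition at the mount point
sudo mount /dev/nvme0n1p1 /mnt/nvme
```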
Step 7: Verify the Mount
You can verify that the drive is mounted by listing the contents of the mount point:
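For example, assuming the /mnt/nvme mount point from above:

```bash
# List the mount point contents and confirm the mount
ls /mnt/nvme
df -h /mnt/nvme
```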
Step 8: Automount on Boot
To mount the drive automatically on boot, edit the /etc/fstab file and add a new line for the partition. To test, unmount the drive, then remount all drives listed in /etc/fstab, as sketched below.
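A sketch using the device path directly (a UUID from `sudo blkid` is more robust if device names change):

```bash
# Append an fstab entry for the partition
echo '/dev/nvme0n1p1 /mnt/nvme ext4 defaults 0 2' | sudo tee -a /etc/fstab

# Test: unmount the drive, then remount everything listed in /etc/fstab
sudo umount /mnt/nvme
sudo mount -a
```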
Step 9: Change Permissions
You can set read, write, and execute permissions on the mount point as needed. Here are some common examples (sketched as commands after this list):
Full access to everyone:
Full access to the owner, read access to others:
Full access only for the owner:
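Sketches of the three examples above, in order, assuming the /mnt/nvme mount point:

```bash
# Full access to everyone
sudo chmod 777 /mnt/nvme

# Full access to the owner, read (and directory traversal) access to others
sudo chmod 755 /mnt/nvme

# Full access only for the owner
sudo chmod 700 /mnt/nvme
```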
Converting from MBR to GPT By default, fdisk uses the MBR (Master Boot Record) partition table, which has a 2TB size limit. To use the full capacity of a disk larger than 2TB, you need to use the GPT (GUID Partition Table) scheme instead. Here's how to set up a GPT partition table and create a new partition that spans the entire disk:
Step 1: Backup Any Important Data
If there's any data on the drive, back it up before proceeding. Converting to GPT will delete existing partitions.
Step 2: Use gdisk to Create a GPT Partition Table
Run gdisk to modify the partition table:
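For example, assuming the same /dev/nvme0n1 drive:

```bash
# Open the drive in gdisk (interactive, GPT-aware)
sudo gdisk /dev/nvme0n1
```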
Step 3: Convert to GPT and Create a New Partition
Convert to GPT:
Type o and press Enter to create a new GPT partition table. (This will wipe existing partitions on the disk.)
Create a New Partition:
Type n to create a new partition.
Choose the default partition number (usually 1).
Press Enter to accept the default first sector.
Press Enter to accept the default last sector, allowing the partition to use the entire available space.
Write the Changes:
Type w to write the changes to the disk and confirm when prompted.
Step 4: Format the New Partition
Now format the newly created partition, adjusting the following command if needed for a different filesystem:
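Assuming the new partition is /dev/nvme0n1p1:

```bash
# Format the new GPT partition with ext4
sudo mkfs.ext4 /dev/nvme0n1p1
```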
Step 5: Verify the Partition Size
After formatting, check the partition size, as sketched below. You should now see the partition using the full 3.84TB.
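For example:

```bash
# Confirm the new partition spans the full disk
lsblk /dev/nvme0n1
```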
Running Init Scripts on Launch When launching a new virtual machine (VM) on our platform, users have the option to run initialization scripts that automate the installation of packages or custom configurations. This process ensures that the necessary environment is set up as soon as the VM is available for use.
Pre-defined Package Selection At the time of VM creation, users can choose from a list of pre-defined packages that will be installed automatically. This is ideal for users who require commonly used packages and configurations. The common packages we offer are:
NVIDIA CUDA Toolkit
PyTorch
TensorFlow
JupyterLab :::note
Some of our GPU inventory SKUs come with NVIDIA CUDA Toolkit 12.4 pre-installed, for GPU types such as A16, A40, select A100s, and L40S. If you are using one of these SKUs, you may not need to install the CUDA Toolkit manually, but you can still configure additional packages and settings through custom init scripts.
:::
Custom Scripts For advanced configurations, users can add their custom initialization scripts. These scripts are executed at the time of the VM launch, allowing for more flexible and personalized setups. Custom scripts can be used to install specific software, apply security settings, or configure network options.
Monitoring Init Script Execution Even after the VM is marked as Available, the initialization scripts may continue to run in the background. You can monitor the progress of these scripts by following the logs in your terminal, as sketched below. This allows you to follow the ongoing installation and configuration processes in real time, and helps ensure that your custom scripts or package installations are executed correctly.
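A sketch, assuming init scripts run via cloud-init (a common convention; the exact log path may differ on your image):

```bash
# Follow the init-script output in real time
sudo tail -f /var/log/cloud-init-output.log
```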
Best Practices
Ensure that your custom scripts are well-tested to avoid issues during VM startup.
Scripts that include package installations may take time depending on their size, so check the logs to monitor their completion. By using init scripts, you can automate the setup process, saving time and reducing errors in manual configuration.
Installing NVIDIA Drivers (Debian)
Introduction To fully leverage the capabilities of your GPU-accelerated VM within the Ori Global Cloud (OGC) platform, it is essential to install the appropriate NVIDIA drivers. This ensures that your VM can efficiently utilize the underlying hardware for GPU-intensive tasks such as machine learning, deep learning, and high-performance computing. This guide will walk you through the process of installing NVIDIA GPU drivers on your VM.
Prerequisites
Ensure that your VM instance is available.
Verify that you have access to your VM via SSH. Use the Ori-provided SSH command on the VM details page.
Installation Steps All the steps are based on the following guide: https://wiki.debian.org/NvidiaGraphicsDrivers#Debian_12_.22Bookworm.22
Update repositories and upgrade all packages to the latest version. Check that the GPU is detected, then install the NVIDIA driver compilation prerequisites (https://wiki.debian.org/NvidiaGraphicsDrivers#Prerequisites), as sketched below.
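A sketch of these steps (the headers package below targets the stock Debian amd64 kernel; adjust it to your kernel and architecture):

```bash
# Update package lists and upgrade installed packages
sudo apt update
sudo apt upgrade -y

# Check that the GPU is detected
lspci | grep -i nvidia

# Install the driver compilation prerequisites (kernel headers)
sudo apt install -y linux-headers-amd64
```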
Reboot your system with `sudo reboot`.
Add "contrib", "non-free" and "non-free-firmware" components to /etc/apt/sources.list, for example:
Update the list of available packages, then install the nvidia-driver package plus the necessary firmware:
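Following the Debian wiki, a sketch of both steps:

```bash
# Refresh package lists, then install the driver and firmware
sudo apt update
sudo apt install -y nvidia-driver firmware-misc-nonfree
```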
Reboot your system with `sudo reboot`.
Now disable the default nouveau GPU driver. To do that, create a new configuration file and add the lines shown in the sketch below. If you edit the file by hand, save the changes and exit; in nano, press Ctrl+X, then confirm with Y and press Enter.
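A non-interactive sketch (the file name follows the usual blacklist convention):

```bash
# Create the blacklist configuration that disables nouveau
printf 'blacklist nouveau\noptions nouveau modeset=0\n' | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
```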
Rebuild the kernel initramfs with `sudo update-initramfs -u`.
Reboot your system with `sudo reboot`.
Verify it is now working by running `nvidia-smi`. Troubleshooting If you encounter issues during the driver installation, consider the following troubleshooting steps:
Check the NVIDIA developer forums and knowledge base for solutions to common issues.
Ensure that any previous NVIDIA driver installations are completely removed before attempting a fresh installation.
Additional Resources For more detailed instructions, advanced configurations, and troubleshooting advice, refer to the official NVIDIA documentation available on the NVIDIA developer pages. You can also find support and community discussions that may assist with unique installation scenarios or issues.
Installing NVIDIA Drivers (SXM)
Introduction NVIDIA SXM GPUs are specialized for high-performance computing and deep learning applications, offering superior performance and efficiency compared to standard PCIe GPUs. Installation Steps for Nvidia Cuda 12.5.1 (Nvidia driver 555) on H100/H200 SXM
Installation steps are provided for Ubuntu 24.04, Ubuntu 22.04 and Ubuntu 20.04. Verification
After reboot, reconnect to the VM and verify that the driver is installed, along with its version. Then verify NVLink is disabled; the status needs to be N/A, NOT In Progress.
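A sketch of both checks:

```bash
# Driver version and visible GPUs
nvidia-smi

# NVLink status; this should report N/A, not "In Progress"
nvidia-smi nvlink --status
```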
Installing NVIDIA Drivers
Introduction To fully leverage the capabilities of your GPU-accelerated VM within the Ori Global Cloud (OGC) platform, it is essential to install the appropriate NVIDIA drivers. This ensures that your VM can efficiently utilize the underlying hardware for GPU-intensive tasks such as machine learning, deep learning, and high-performance computing. This guide will walk you through the process of installing NVIDIA GPU drivers on your VM.
Prerequisites
Ensure that your VM instance is available.
Verify that you have access to your VM via SSH. Use the Ori-provided SSH command on the VM details page.
Installation Steps Prepare the System: find information about your system's GPU, your Linux OS version, and your VM CPU architecture, as sketched below.
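For example:

```bash
# GPU model(s)
lspci | grep -i nvidia

# Linux OS version
cat /etc/os-release

# CPU architecture (e.g. x86_64 or aarch64)
uname -m
```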
Then follow the NVIDIA developer install instructions, using the selection process with the information collected above: https://developer.nvidia.com/cuda-downloads?target_os=Linux
Input the commands provided in the NVIDIA installation guide. Verify the Installation: Once the installation is complete, reboot the VM with `sudo reboot`.
Verify the driver installation by running `nvidia-smi`:
The nvidia-smi utility should display information about the GPU and the installed driver.
Troubleshooting If you encounter issues during the driver installation, consider the following troubleshooting steps:
Check the NVIDIA developer forums and knowledge base for solutions to common issues.
Ensure that any previous NVIDIA driver installations are completely removed before attempting a fresh installation.
Additional Resources For more detailed instructions, advanced configurations, and troubleshooting advice, refer to the official NVIDIA documentation available on the NVIDIA developer pages. You can also find support and community discussions that may assist with unique installation scenarios or issues.
Overview OGC Virtual Machines provide a powerful and flexible way to run AI/ML workloads.
GPU Availability - Through OGC you can get the widest variety of GPU types (e.g. V100, V100S, A100, L4, L40S, H100, H100 SXM and more!) and locations across the globe.
Fractional GPUs - VMs come in many configurations of GPUs, from small fractions of a GPU (e.g. 1/20th) up to multiple GPUs in a single node.
Per Minute Billing - Billing is usage-based and charged per minute.
Advanced Features - Ability to Suspend, Resume and Restart machines to better manage usage and optimise spend.
Use Cases
Ideal for large ML inference/training workloads looking to leverage the most powerful GPUs (e.g. 8x H100 SXM).
Suitable for customers looking for flexible pricing or fixed-term contracts.
Perfect for running PoC and experiments due to on-demand pricing and fractional GPUs.
Resources
Run Ori's benchmarking framework BeFOri (https://github.com/ori-edge/BeFOri).
Read about how Billing (billing.md) works.
Review your available Quotas (https://console.ogc.ori.co/settings/organisation/quotaLimits).
Transferring Data from a VM to External Cloud Storage
Introduction This guide provides instructions for users on how to transfer data from a GPU Virtual Machine (VM) running Ubuntu or Debian to external storage solutions. This can include cloud storage services or personal hardware. The guide also covers how to persist your environment, such as Python and Conda settings, ensuring that your AI/ML software and data are safely backed up.
Prerequisites
Access to a GPU VM running Ubuntu or Debian.
Sufficient permissions to install software and execute commands on the VM.
Access to the destination storage solution (cloud storage credentials or physical storage device).
Step 1: Prepare the VM Ensure your VM is up to date and install the necessary utilities for transferring files (if not already installed).
Step 2: Backing Up Your Environment To back up your Python or Conda environment, create an environment file (for Conda environments or for Python virtual environments, as sketched below). This file should be included with your data backup to recreate your working environment later.
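A minimal sketch of Steps 1 and 2 (the utility list is illustrative; export whichever environment type you use):

```bash
# Step 1: bring the system up to date and install transfer utilities
sudo apt update && sudo apt upgrade -y
sudo apt install -y rsync curl zip

# Step 2: export the environment so it can be recreated later
conda env export > environment.yml   # for Conda environments
pip freeze > requirements.txt        # for Python virtual environments
```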
Step 3: Transferring Data to External Storage
To Cloud Storage For cloud storage services like AWS S3, Google Cloud Storage, or Azure Blob Storage, first ensure you have the respective CLI tools installed and configured on your VM.
AWS S3 Example
Install AWS CLI:
Configure AWS CLI with your credentials:
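A sketch of these steps on Ubuntu/Debian (the bucket name is illustrative; the zip-based installer from the AWS documentation is an equally valid install path):

```bash
# Install the AWS CLI from the distribution repositories
sudo apt install -y awscli

# Configure credentials, default region and output format interactively
aws configure

# Recursively copy a directory to S3 (bucket name is illustrative)
aws s3 sync /path/to/data s3://my-backup-bucket/data
```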
Copy data to S3 with `aws s3 sync` (or `aws s3 cp`), as in the sketch above.
Google Cloud Storage Example
Install and initialize the Google Cloud SDK:
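A sketch using Google's interactive installer (the bucket name is illustrative; gsutil ships with the SDK):

```bash
# Download the Google Cloud SDK, reload the shell, then initialize
curl https://sdk.cloud.google.com | bash
exec -l $SHELL
gcloud init

# Recursively copy a directory to a GCS bucket
gsutil -m cp -r /path/to/data gs://my-backup-bucket/data
```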
Copy data to GCS with `gsutil cp`, as sketched above.
Step 4: Verifying the Transfer After transferring, verify that all files have been correctly copied to your external storage solution. This can typically be done by comparing file sizes or using checksums for integrity verification.
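For example, sizes and checksums can be compared like this (object and bucket names are illustrative; for single-part S3 uploads the ETag equals the MD5 hash):

```bash
# Local size and checksum
stat -c %s mydata.tar.gz
md5sum mydata.tar.gz

# Remote object metadata, including ContentLength and ETag
aws s3api head-object --bucket my-backup-bucket --key mydata.tar.gz
```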
Transferring VM Data to an External Computer via SCP Secure Copy Protocol (SCP) is a method for securely transferring files between a local host and a remote host or between two remote hosts. This section will guide you on using SCP to transfer data from your GPU VM to an external computer owned by the user.
Prerequisites
SSH access to your VM.
The external computer must have an SSH server running and accessible.
Know the IP address or hostname of the external computer, as well as the username on that system.
Ensure the external computer is reachable over the network from the GPU VM.
Step 1: Prepare the External Computer Ensure SSH Server is Installed and Running: On a Linux or macOS computer, the SSH server is often pre-installed. You can start it with:
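A sketch for common setups (the service may be named sshd on some distributions):

```bash
# Linux (systemd): start the OpenSSH server
sudo systemctl start ssh

# macOS: enable Remote Login
sudo systemsetup -setremotelogin on
```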
On Windows, you might need to enable the "OpenSSH Server" feature from the "Apps & features" settings or install it manually. Check the Firewall: Ensure the firewall on the external computer allows incoming connections on the SSH port (default is 22).
Step 2: Backing Up Your Environment As in the previous section, back up your Python or Conda environment by creating an environment file (`conda env export` for Conda, `pip freeze` for Python virtual environments). This file should be included with your data backup to recreate your working environment later.
Step 3: Transfer Files Using SCP Open a Terminal on Your VM. Execute the SCP Command: To transfer a file or directory from the VM to the external computer, use the following command format:
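The placeholders are explained below; the second command is a concrete example with illustrative names:

```bash
# Generic format
scp -r /path/to/local/data username@external-host:/path/to/destination/folder

# Example: recursively copy ~/myproject into ~/data_backup on the external host
scp -r ~/myproject alice@203.0.113.10:~/data_backup
```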
Replace /path/to/local/data with the path to the data on your GPU VM you wish to transfer.
Replace username with your username on the external computer.
Replace external-host with the IP address or hostname of the external computer.
Replace /path/to/destination/folder with the path on the external computer where you want to transfer the data. Example: the second command in the sketch above recursively copies the myproject directory from the GPU VM to the data_backup directory on the external computer. Authentication: Upon executing the command, you may be prompted to enter the password for the username account on the external computer.
If you use SSH keys for authentication, ensure the private key is available on your GPU VM, and you may need to specify it using the -i option. Verify the Transfer: After the transfer completes, log into your external computer and verify that the files or directories have been successfully copied.
Notes
The -r option is used to recursively copy entire directories. Omit this option if you're transferring a single file.
SCP uses SSH for data transfer, providing a secure channel. Ensure your passwords or private keys are kept secure.
If transferring large amounts of data, consider using a wired connection if possible to speed up the transfer.
Managing SSH Keys SSH keys are the way to access your compute resources. OGC offers an SSH key management service that enables you to add and reuse keys across multiple machines.
Step 1: Generating the SSH Key Pair You will first need to generate the SSH key pair (public / private key). The private key will be stored on the customer
side while the public key will be used in OGC. Open Terminal.
Run the command: ssh-keygen -t rsa -b 4096. This creates a new RSA key pair with a 4096-bit length, offering strong security.
Follow the prompts to choose where to save the key and enter a passphrase for added security.
On Windows: Download and install PuTTY, which includes PuTTYgen, from the official site (https://www.putty.org/).
Open PuTTYgen.
Click on Generate and follow the instructions to create a new key. Usually, this involves moving your mouse around the blank area to generate randomness.
Once the key is generated, you will see the public key displayed in the window.
Save the private key to your computer by clicking Save private key. It's advisable to use a passphrase for added security. Step 2: Locating Your SSH Key After generation, your public key will be located in the specified directory (the default is usually ~/.ssh/). The public key file is typically named id_rsa.pub.
Step 3: Using SSH Key With the public key contents (a single line of text starting with ssh-rsa), you are now able to provision your compute resource.
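To print the public key for pasting into OGC (assuming the default key path):

```bash
cat ~/.ssh/id_rsa.pub
```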
Step 4: Connecting to the machine Open Terminal.
Use the command ssh -i /path/to/private/key username@ip_address.
If it's your first time connecting, you'll be asked to confirm the server's authenticity. Type yes to continue.
Enter your passphrase if you set one.
On Windows: Open PuTTY.
Enter the machine's IP address and specify the port (usually port 22 for SSH).
Go to Connection > SSH > Auth in the PuTTY configuration and load your private key file.
Click Open to initiate the connection.
If prompted, enter the username for your machine and the passphrase for your private key.
Default Open Ports (SXM)
Introduction When deploying the SXM Virtual Machines (VMs), certain network ports are opened by default to facilitate essential services and applications. These ports are specifically chosen to support common use cases and development environments, ensuring smooth operation and accessibility of various tools and services. Below is a list of the ports that are opened by default on our H100/V100 SXM Virtual Machines, along with a brief explanation of their purpose:
List of Open Ports
SSH (Port 22/TCP) SSH (Secure Shell) is used for securely connecting to the VM remotely. It provides encrypted communication for accessing and managing the VM's operating system.
HTTP (Port 80/TCP) HTTP is the standard protocol for serving web content. This port is used to serve web applications or APIs that do not require encryption.
HTTPS (Port 443/TCP) HTTPS is used for secure web traffic. It encrypts the communication between the client and server to protect data integrity and privacy.
HTTP Alternative (Port 8080/TCP) Often used as an alternative HTTP port, typically for development or testing environments where multiple web servers might be running on the same machine.
Custom Application (Port 8000/TCP) Port 8000 is commonly used for running development servers, especially for Python-based frameworks like Django.
Jupyter Notebook (Port 8888/TCP) This port is typically used by Jupyter Notebooks, a web-based interactive computing environment for Python, often used in data science.
TensorBoard (Port 6006/TCP) TensorBoard is a tool for visualizing TensorFlow or PyTorch training runs and graph models. It listens on port 6006.
MySQL (Port 3306/TCP) MySQL, a widely used relational database management system, typically operates on port 3306.
PostgreSQL (Port 5432/TCP) PostgreSQL, another popular relational database, listens on port 5432.
MongoDB (Port 27017/TCP) MongoDB, a NoSQL database, uses port 27017 as its default listening port.
HTTPS Alternative (Port 8443/TCP) Port 8443 is often used as an alternative to port 443 for secure web traffic, especially in environments where multiple services require SSL/TLS encryption.
By pre-opening these ports, we ensure that users have immediate access to critical services like SSH, HTTP/HTTPS, databases, and popular development tools like Jupyter Notebooks and TensorBoard without additional configuration. Moreover, these ports are only opened on SXM Virtual Machines to maintain a controlled and secure environment.
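To confirm from inside the VM which of these ports actually have a service listening, standard Linux tooling is enough; for example:

```bash
# List listening TCP sockets and the owning processes
sudo ss -tlnp
```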
Support If you encounter any issues or need help to open ports on your SXM VMs, our support team is here to help.
Raise a support ticket here (https://oriindustries.atlassian.net/servicedesk/customer/portals).