System vSphere Template
Overview¶
This documentation provides an overview of the system template for Rafay Managed Kubernetes Clusters (MKS) on VMware vSphere. These templates are designed to simplify the provisioning, configuration, and management of Kubernetes clusters on VMware vSphere.
Initial Setup¶
The platform team is responsible for performing the initial configuration and setup of the MKS on vSphere template. The sequence diagram below outlines the high-level steps: the platform team configures the template from the system catalog, shares it with the project they manage, and then shares it downstream with the end user.
```mermaid
sequenceDiagram
    participant Admin as Platform Admin
    participant Catalog as System Catalog
    participant Project as End User Project
    Admin->>Catalog: Selects MKS on vSphere Template from System Catalog
    Admin->>Project: Shares Template with Predefined Controls
    Project-->>Admin: Template Available in End User's Project
```
End User Flow¶
The end user launches a shared template, provides the required input values, and deploys the cluster; an illustrative set of input values follows the diagram.
```mermaid
sequenceDiagram
    participant User as End User
    participant Project as Rafay Project
    participant Infra as VMware vSphere Infra
    User->>Project: Launch Shared Template for MKS on VMware vSphere
    User->>Project: Provide Required Input Values
    note right of User: Input values include:<br>API Key,<br>Node Configuration<br>(SSH Key, Authorized Key),<br>vSphere Details<br>(Datacenter, Network, Datastore,<br>Server Address, Compute Cluster,<br>VM Template)
    User->>Project: Click "Deploy"
    Project->>Infra: Deploy Virtual Machines and Prepare Nodes<br>for Kubernetes Deployment
    Project->>Infra: Provision Rafay Managed Kubernetes Cluster<br>on the deployed nodes
    Infra-->>User: Cluster Deployment Successful
```
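For illustration only, the values an end user supplies at launch have the following shape. The template prompts for each of these as a separate input variable (see the table below); the JSON document and its key names here are simply a convenient grouping, and the sample values mirror the defaults listed later on this page:

```json
{
  "api_key": "<controller API key>",
  "node_configuration": {
    "ssh_private_key": "<SSH private key for node access>",
    "authorized_key": "<public key for remote SSH access>"
  },
  "vsphere_details": {
    "server": "pcc-147-135-35-53.ovh.us",
    "datacenter": "pcc-147-135-35-53_datacenter1145",
    "compute_cluster": "Cluster1",
    "network": "rafay",
    "datastore": "ssd-001870",
    "vm_template": "vm-agent-template"
  }
}
```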
This system template allows you to configure, templatize, and provision a Rafay Managed Kubernetes Cluster (Rafay MKS) on VMware vSphere.
The templates are designed to support both:
- Day 0 operations: Initial setup
- Day 2 operations: Ongoing management such as Kubernetes upgrades, addition of new nodes, etc. (see the example below)
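For example, Day 2 Kubernetes upgrades are driven by the Kubernetes Upgrade input variable documented in the table below. Its default JSON value, shown pretty-printed here, pairs a sequential strategy with a 50% worker concurrency parameter:

```json
{
  "strategy": "sequential",
  "params": {
    "worker_concurrency": "50%"
  }
}
```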
Key Capabilities¶
This template enables users to:
- Deploy Virtual Machines on VMware vSphere based on the provided configuration.
- Provision and manage the lifecycle of Rafay Managed Kubernetes Clusters in a VMware vSphere environment.
- Configure:
    - Container Network Interface (CNI), as shown in the example below
    - Add-ons defined in the cluster blueprint.
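As an example of CNI configuration, the Cluster Network input variable (see the table below) takes a JSON value that selects the CNI along with the pod and service subnets. Its default, pretty-printed, is:

```json
{
  "cni": {
    "name": "Calico",
    "version": "3.26.1"
  },
  "pod_subnet": "10.244.0.0/16",
  "service_subnet": "10.96.0.0/12"
}
```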
Resources¶
This system template will deploy the following resources:
- Virtual Machines on the VMware vSphere Infra based on the provided configuration.
- Upstream Kubernetes on the deployed VM nodes.
Pre-Requisites¶
- Access to VMware vSphere Infra: Ensure you have access to VMware vSphere with the following details:
    - Datacenter
    - Network
    - Datastore
    - Compute Cluster
    - vSphere Server Address
    - vSphere Username
    - vSphere Password
    - Private Key: Used for accessing the nodes.
    - Public Authorized Key: Used for remote SSH access to the VMs.
    - VM Template: The template to be used for creating VMs on VMware vSphere.
- Agent Configuration: An agent must be configured in the project where the template will be used. Follow these instructions to deploy an agent: Agent Deployment Guide. Existing agents can also be reused.
- Rafay Configuration: Specify the API key of the controller for the API Key input variable.
Input Variables for MKS on VMware vSphere System Template¶
Name | Default Value | Description | Value Type |
---|---|---|---|
Worker VM Memory | 64 | Amount of memory [GiB] per worker VM | Text |
Worker VM Prefix | $(environment.name)$-w | Prefix for worker virtual machine names | Expressions |
Cluster Blueprint | minimal | Blueprint to be added to the cluster | Text |
Worker VM Count | 1 | Number of worker VMs to create | Text |
Worker VM Disk Data Size | 30 | Additional storage device configured for the VM [GiB] | Text |
System Components Placement | {"node_selector":{},"tolerations":[]} | Placement settings for system components | JSON |
Worker VM CPU | 8 | Number of CPUs per worker VM | Text |
vSphere User | rafay | vSphere username for authentication | Text |
Controlplane VM Disk Data Size | 30 | Additional storage device configured for the Control Plane VM [GiB] | Text |
Auto Approve Nodes | true | Automatically approve nodes (Allowed: [true, false]) | Text |
Controlplane VM Prefix | $(environment.name)$-cp | Prefix for Control Plane virtual machine names | Expressions |
vSphere Compute Cluster | Cluster1 | vSphere compute cluster where virtual machines will be created | Text |
vSphere VM Template | vm-agent-template | Template name for creating virtual machines | Text |
Cluster Blueprint Version | latest | Blueprint version | Text |
vSphere Network Worker | rafay | Configure vSphere network in the selected data center for Worker Node VMs | Text |
Controlplane VM Memory | 16 | Amount of memory [GiB] per Control Plane VM | Text |
vSphere Password | | vSphere password for authentication | Text |
Controlplane VM CPU | 4 | Number of CPUs per Control Plane VM | Text |
VM Operating System | Ubuntu-22.04 | Operating system of the VM | Text |
vSphere Resource Pool | ovhServers | The vSphere resource pool to use for VM deployment. Leave empty to use the default cluster pool. | Text |
vSphere Server | pcc-147-135-35-53.ovh.us | The vCenter server IP or FQDN | Text |
Cluster Project | $(environment.project.name)$ | Name of the project | Expressions |
Cluster Network | {"cni":{"name":"Calico","version":"3.26.1"},"pod_subnet":"10.244.0.0/16","service_subnet":"10.96.0.0/12"} | The network configuration | JSON |
Cloud Credentials | | Upstream cloud credentials | Text |
vSphere Network Controlplane | rafay | Configure vSphere network in the selected data center for Control Plane VMs | Text |
vSphere Datastore | ssd-001870 | Datastore where virtual machines will reside | Text |
Cluster Dedicated Controlplanes | false | Enable dedicated control planes (Allowed: [true, false]) | Text |
Proxy Config | {} | Configure proxy if your infrastructure uses an outbound proxy | JSON |
Cluster Kubernetes Version | v1.30.4 | Version of Kubernetes (Allowed: [v1.28.13, v1.29.8, v1.30.4, v1.31.0]) | Text |
Worker VM OS Disk Size | 50 | Primary storage device for the Worker VM [GiB] | Text |
Cluster HA | false | Enable high availability (Allowed: [true, false]) | Text |
vSphere Worker Folder | $(environment.name)$-worker | vSphere folder where Worker VMs will be organized | Expressions |
Kubernetes Upgrade | {"params":{"worker_concurrency":"50%"},"strategy":"sequential"} | Kubernetes upgrade strategy and parameters | JSON |
Controlplane VM Disk OS Size | 50 | Primary storage device for the Control Plane VM [GiB] | Text |
vSphere Controlplane Folder | $(environment.name)$-controlplane | vSphere folder where the Control Plane VMs will be organized | Text |
VM Username | ubuntu | VM username for authentication | Text |
vSphere Storage Policy | | The vSphere storage policy. Leave empty if not using a storage policy. | Text |
Cluster Labels | {"env":"dev","release":"stable"} | Labels for the cluster | JSON |
vSphere Datacenter | pcc-147-135-35-53_datacenter1145 | vSphere data center to deploy virtual machines | Text |
Controlplane VM Count | 1 | Number of Control Plane VMs to create | Text |
Cluster Location | sanjose-us | Location of the cluster | Text |
Cluster Name | $(environment.name)$ | Name of the cluster | Expressions |
API Key | | Enter the API key of the controller | Text |
Rest Endpoint | console.rafay.dev | Select the endpoint of the controller | Text |
private-key | | SSH private key for virtual machine access | Text |
authorized-key | | Public key to configure remote SSH access to the nodes | Text |
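The remaining JSON-typed input variables expand as shown below. The outer keys in this sketch are illustrative labels only (each variable is entered separately in the launch form), and the values are the defaults from the table above; adjust node selectors, tolerations, labels, and proxy settings to suit your environment:

```json
{
  "System Components Placement": {
    "node_selector": {},
    "tolerations": []
  },
  "Cluster Labels": {
    "env": "dev",
    "release": "stable"
  },
  "Proxy Config": {}
}
```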
Launch Time¶
The estimated time to launch an MKS cluster using this template is approximately 15 to 20 minutes.