
System MKS Template

Overview

This documentation provides an overview of the system template for Rafay Managed Kubernetes Clusters (MKS). These templates are designed to simplify the provisioning, configuration, and management of Kubernetes clusters.

Initial Setup

The platform team is responsible for the initial configuration and setup of the MKS template. The sequence diagram below outlines the high-level steps: the platform team configures the template from the system catalog, shares it with the project they manage, and then shares it downstream with the end user.

sequenceDiagram
    participant Admin as Platform Admin
    participant Catalog as System Catalog
    participant Project as End User Project

    Admin->>Catalog: Selects MKS Template from System Catalog
    Admin->>Project: Shares Template with Predefined Controls
    Project-->>Admin: Template Available in End User's Project

End User Flow

The end user launches a shared template, provides required input values, and deploys the cluster.

sequenceDiagram
    participant User as End User
    participant Project as Rafay Project
    participant Cluster as Rafay Managed Kubernetes Cluster

    User->>Project: Launches Shared Template for MKS
    User->>Project: Provides Required Input Values (API Key, Node Configuration, SSH Details)
    User->>Project: Clicks "Deploy"
    Project->>Cluster: Provisions a Rafay Managed Kubernetes Cluster on the specified nodes
    Cluster-->>User: Cluster Deployed Successfully
    Cluster-->>User: Provides Kubeconfig File as Output

This system template allows you to configure, templatize, and provision a Rafay Managed Kubernetes Cluster (Rafay MKS) on any supported operating system. For more details, refer to this document.

The templates are designed to support both:

  • Day 0 operations: Initial setup
  • Day 2 operations: Ongoing management

Infrastructure Types

The template supports provisioning Rafay MKS clusters on various infrastructure types, including:

  • Bare Metal: Users manage the lifecycle of hardware and the operating system.
  • Virtual Machines: Supports Bring Your Own OS or pre-packaged images (e.g., QCOW2, OVA formats).
  • Public Cloud: Flexible deployments on cloud infrastructure.

Key Capabilities

This template enables users to:

  • Provision and manage the lifecycle of Rafay Managed Kubernetes Clusters.
  • Configure (see the Network example below):
    • Container Network Interface (CNI)
    • Add-ons defined in the cluster blueprint.
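
For reference, the CNI and cluster subnets are supplied through the Network input variable. A minimal value matching the template defaults (Calico 3.26.1 with the default pod and service subnets) is sketched below; adjust it for your environment:

```json
{
  "cni": { "name": "Calico", "version": "3.26.1" },
  "pod_subnet": "10.244.0.0/16",
  "service_subnet": "10.96.0.0/12"
}
```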

As part of the output, users receive a kubeconfig file with cluster-wide privileges for secure access.

Resources

This system template will deploy the following resources:

  • Upstream Kubernetes on the specified nodes.

Pre-Requisites

  1. Underlying Infrastructure:
    Ensure the required infrastructure is available for deploying the cluster. Refer to Infrastructure Requirements.

  2. Agent Configuration:
    An agent must be configured in the project where the template will be used. Follow these instructions to deploy an agent: Agent Deployment Guide. Existing agents can also be reused.

  3. Rafay Configuration:
    Specify the API key of the controller for the API Key input variable.

  4. SSH Key:
    Provide the SSH key of the node on which the installer will run the Kubernetes deployment.

Configuration

  1. Node Information:
    • Specify details for the following (see the example under Node Configuration below):
      • Control Plane Node(s)
      • Worker Node(s)

Node Configuration
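
For illustration, a control plane node entry follows the shape of the template default shown in the input table below. The top-level key must match the node's hostname; the hostname, IP addresses, and key path here are placeholder values to replace with your own:

```json
{
  "hostname-1": {
    "arch": "amd64",
    "hostname": "hostname-1",
    "private_ip": "10.1.0.67",
    "operating_system": "Ubuntu22.04",
    "roles": ["ControlPlane", "Worker"],
    "ssh": {
      "ip_address": "129.146.178.0",
      "port": "22",
      "private_key_path": "ssh-key.pem",
      "username": "ubuntu"
    }
  }
}
```

The Worker Node(s) input uses the same structure, with `roles` set to `["Worker"]`.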


Input Variables for MKS System Template

| Name | Default Value | Value Type | Description |
|---|---|---|---|
| System Components Placement | `{ "node_selector": {}, "tolerations": [] }` | JSON | Enter node selectors and tolerations for the cluster. |
| High Availability (HA) | `false` | Text | Allowed: `[true, false]`. Select whether HA should be enabled. |
| Cluster Name | `$(environment.name)$` | Expressions | Enter the name of the Upstream Kubernetes cluster. |
| Cluster Kubernetes Version | `v1.30.4` | Text | Allowed: `[v1.28.13, v1.29.8, v1.30.4, v1.31.0]`. Select the Kubernetes version for the cluster. |
| Network | `{ "cni":{"name":"Calico","version":"3.26.1"}, "pod_subnet":"10.244.0.0/16", "service_subnet":"10.96.0.0/12" }` | JSON | Enter the network information. |
| Control Plane Node(s) | `{ "hostname-1": { "arch": "amd64", "hostname": "hostname-1", "private_ip": "10.1.0.67", "operating_system": "Ubuntu22.04", "roles": ["ControlPlane", "Worker"], "ssh": { "ip_address": "129.146.178.0", "port": "22", "private_key_path": "ssh-key.pem", "username": "ubuntu" } } }` | JSON | Provide the control plane node information. If not using cloud credentials, specify the SSH key. The variable name should match the node's hostname (e.g., `hostname-1` for the node `hostname-1`). |
| Worker Node(s) | `{ "worker-1": { "arch": "amd64", "hostname": "worker-1", "private_ip": "10.1.0.68", "operating_system": "Ubuntu22.04", "roles": ["Worker"], "ssh": { "ip_address": "129.146.178.1", "port": "22", "private_key_path": "key/to/ssh/path", "username": "ubuntu" } } }` | JSON | Provide the worker node information. If not using cloud credentials, specify the SSH key. The variable name should match the node's hostname (e.g., `worker-1` for the node `worker-1`). |
| Cloud Credentials | `upstream-cloud-credential` | Text | Enter the cloud credentials. Leave this field empty to use the SSH key. |
| Kubernetes Upgrade | `{ "strategy":"sequential", "params":{"worker_concurrency":"50%"} }` | JSON | Enter the upgrade strategy for the cluster. |
| Cluster Labels | `{ "env": "dev", "release": "stable" }` | JSON | Enter any labels to assign to the cluster. |
| Blueprint Name | `default` | Text | Enter the name of the blueprint assigned to the cluster. |
| Cluster Location | `sanjose-us` | Text | Enter the location label where the cluster will be deployed. |
| Blueprint Version | `latest` | Text | Specify the version of the blueprint for the cluster. For system blueprints, use `latest`. |
| Cluster Project | `$(environment.project.name)$` | Expressions | Enter the project for the Upstream cluster. |
| Auto Approve Nodes | `true` | Text | Allowed: `[true, false]`. Select whether nodes should be auto-approved. |
| Cluster Dedicated Control Planes | `false` | Text | Allowed: `[true, false]`. Select whether dedicated control planes should be enabled. |
| Proxy Config | `{ "proxy_config": { "default": { "enabled": true, "allow_insecure_bootstrap": true, "bootstrap_ca": "cert", "http_proxy": "http://proxy.example.com:8080/", "https_proxy": "https://proxy.example.com:8080/", "no_proxy": "10.96.0.0/12,10.244.0.0/16", "proxy_auth": "proxyauth" } } }` | JSON | Enter the proxy configuration details. |
| Rest Endpoint | `console.rafay.dev` | Text | Select the endpoint of the controller. |
| API Key | | Text | Enter the API key of the controller. |
| SSH Key | `ssh-key.pem` | Text | Enter the SSH key for the node. |
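
The compact Proxy Config default from the table is shown expanded below for readability; the proxy URLs, bootstrap CA, and `proxy_auth` values are placeholders to replace with your environment's settings:

```json
{
  "proxy_config": {
    "default": {
      "enabled": true,
      "allow_insecure_bootstrap": true,
      "bootstrap_ca": "cert",
      "http_proxy": "http://proxy.example.com:8080/",
      "https_proxy": "https://proxy.example.com:8080/",
      "no_proxy": "10.96.0.0/12,10.244.0.0/16",
      "proxy_auth": "proxyauth"
    }
  }
}
```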

Launch Time

Launching an MKS cluster using this template takes approximately 15 to 20 minutes.