Installation

Introduction

Installing and configuring the Controller using Helm involves a series of steps to ensure proper deployment within a Kubernetes environment. This procedure includes updating the values.yaml file with crucial parameters such as domain names, high availability settings, and API configurations.

This procedure helps users deploy the Controller efficiently on any environment running Kubernetes, including EKS, GKE, and clusters running on virtual machines (VMs) or bare metal. It provides detailed instructions for customizing deployment settings, managing Helm charts, and ensuring that the Controller is correctly configured for the user's infrastructure, and it also covers upgrading and uninstalling the Controller.

Important

Currently, the Helm-based method to deploy the Rafay Controller only supports deploying PostgreSQL locally on the cluster and does not support external databases such as RDS. Additionally, it does not support custom registries (ECR/JFrog) that have authentication enabled.


Prerequisites

  • Ensure that kubectl and Helm 3 are installed
  • Access to the kubeconfig file of the target Kubernetes cluster, with the ability to interact with the cluster using kubectl commands
  • Ensure the tar extraction utility is installed on the system
  • A Kubernetes cluster with nodes having 32 vCPUs, 64 GB memory, and 500 GB root storage
  • Wildcard DNS records created in the respective domain, including the required wildcard entries; refer to DNS Record Creation for more information
  • A default StorageClass must be available in the cluster. If none exists, one can be installed using any storage engine (e.g., OpenEBS); see the sketch after this list
  • Run the following commands on the node where the Rafay Controller will be installed:
    • sudo iptables -F
    • sudo sysctl fs.inotify.max_user_instances=8192
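
If the cluster has no default StorageClass, one possible approach is to install OpenEBS and mark its hostpath StorageClass as the default. This is a sketch, not a prescription; the chart repository and StorageClass name below are those published by the OpenEBS project and may differ in your environment:

    # Check whether a default StorageClass already exists
    kubectl get storageclass

    # Install OpenEBS into its own namespace
    helm repo add openebs https://openebs.github.io/charts
    helm repo update
    helm install openebs openebs/openebs -n openebs --create-namespace

    # Mark the hostpath StorageClass as the cluster default
    kubectl patch storageclass openebs-hostpath \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'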

Installation

These instructions are distribution-agnostic. Modify as needed based on your infrastructure provider and Kubernetes distribution. Contact support if you require assistance.

  1. Contact the support team to obtain the latest version of the Rafay Controller Helm package, along with the cluster images and assets required for cluster lifecycle management (LCM) using the Rafay Controller.

  2. Extract the package using the following command:

    tar -xvf rafay-helm-controller-<version>.tar.gz
    

  3. Once extracted, update the parameters listed under Configuration Required and modify the values.yaml file accordingly.

  4. Install the Controller dependencies:

    Important

    Make sure to configure the values.yaml file with the necessary parameters before proceeding with the remaining installation steps.

    helm install rafay-dep -f values.yaml ./rafay-dep-<version>.tgz
    

    Important

    Wait for all pods to be in the Running state. Use the command kubectl get pods -A to check if all the pods are running. Once all pods are running, proceed to Step 5.
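
    If you prefer to script this wait instead of polling manually, a minimal sketch follows (note that pods belonging to completed Jobs report Completed rather than Ready, so the command may need to tolerate or exclude them):

    kubectl wait pod --all --all-namespaces \
      --for=condition=Ready --timeout=600s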

    Optional: Load Balancer (LB) External IP

    Run the following command to retrieve the Load Balancer (LB) External IP associated with the cluster:

    kubectl get svc -n istio-system istio-ingressgateway
    
    Once you have obtained the Load Balancer (LB) External IP, update the A record console.<domain> on your DNS management page. Depending on the configuration, this can be either the Load Balancer IP or the direct IP of the Kubernetes cluster where the controller application is installed. For detailed instructions on creating and updating DNS records, refer to the DNS Record Reference.
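
    To capture the External IP in a script, the standard jsonpath queries below can be used. Note that some load balancers (for example, AWS ELB on EKS) expose a hostname instead of an IP:

    # IP-based load balancers
    kubectl get svc -n istio-system istio-ingressgateway \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

    # Hostname-based load balancers
    kubectl get svc -n istio-system istio-ingressgateway \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'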

  5. To install the Controller, run:

    helm install rafay-core -n rafay-core -f values.yaml ./rafaycore-<version>.tgz
    

    Important

    Wait for all pods to be in the Running state. Use the command kubectl get pods -n rafay-core to check if all the pods are running. Once all pods are running, proceed to Step 6.

  6. To install the Controller access rules, run the following command:

    helm install istio-rules -n istio-system -f values.yaml ./istio-rules-<version>.tgz
    
    Wait a minute or two for the resources to be configured properly.
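
    To confirm the rules were applied, you can list the Istio routing resources in the namespace. This assumes the chart creates Istio Gateway and VirtualService objects, which is inferred from the chart name rather than documented here:

    kubectl get gateways.networking.istio.io,virtualservices.networking.istio.io -n istio-system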

  7. Access the Console in the web browser:

    https://console.<default_partner_console_domain>
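
    Before opening a browser, you can verify DNS resolution and TLS reachability from the command line (a quick check, not part of the official procedure; replace the placeholder with your domain):

    # Prints the HTTP status code; any response confirms DNS and ingress are reachable
    curl -sk -o /dev/null -w '%{http_code}\n' https://console.<default_partner_console_domain>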
    

  8. The system is now ready to push the cluster images to begin cluster provisioning. Extract the downloaded Rafay cluster package:

    tar -xvf rafay-helm-cluster-<version>.tar.gz
    

  9. Grant executable permissions to the radm file:

    chmod +x radm
    

  10. Push the Rafay cluster images to the Nexus registry. Ensure that all pods, especially Nexus and admin-api, are in the Running state before executing the following command:

    sudo ./radm cluster --fqdn <default_partner_console_domain> --unarchive /tmp
    

    • fqdn: Provide the default partner console domain
    • unarchive (Optional): Use this option to extract the cluster packages; by default, the cluster package is extracted to /tmp
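
    One way to confirm the Nexus and admin-api pods are running before invoking radm (the grep patterns are illustrative; match them against the actual pod names in your deployment):

    kubectl get pods -A | grep -Ei 'nexus|admin-api'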

Uninstall

Follow the steps below for a successful uninstallation. Wait for all pods in the Terminating state to exit before running the next command:

helm uninstall istio-rules -n istio-system
helm uninstall rafay-core -n rafay-core
helm uninstall rafay-dep

Note: Ensure that Elastic CRDs are deleted if present.
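
A sketch for finding and removing leftover Elastic CRDs; review the list before deleting, as CRD names vary by version:

kubectl get crd -o name | grep -i elastic          # review the list first
kubectl get crd -o name | grep -i elastic | xargs kubectl delete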


Upgrade

Use the following commands to upgrade the respective Helm charts:

helm upgrade rafay-dep -f values.yaml ./rafay-dep-<version>.tgz
helm upgrade rafay-core -n rafay-core -f values.yaml ./rafaycore-<version>.tgz
helm upgrade istio-rules -n istio-system -f values.yaml ./istio-rules-<version>.tgz
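
After upgrading, you can confirm the deployed release versions and pod health with standard Helm and kubectl checks:

helm list -A
kubectl get pods -A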