
v4.0 Update 4 - SaaS

25 Feb, 2026

Upstream Kubernetes for Bare Metal and VMs

The features in this section are for Rafay's Kubernetes Distribution (aka Rafay MKS).

Kubernetes v1.35

New Rafay MKS clusters based on upstream Kubernetes can now be provisioned with Kubernetes v1.35. Existing clusters managed by the controller can be upgraded in-place to Kubernetes v1.35.
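As a sketch of what this looks like in a v3 cluster spec (the placement of the kubernetesVersion key under spec.config is an assumption here; consult the cluster spec reference for the authoritative MKS schema), provisioning with v1.35 might be expressed as:

```yaml
apiVersion: infra.k8smgmt.io/v3
kind: Cluster
metadata:
  name: <cluster-name>
  project: <project-name>
spec:
  type: mks
  blueprintConfig:
    name: minimal
  config:
    # Assumption: set to v1.35.0 for new clusters; raising this value
    # on an existing cluster triggers an in-place upgrade.
    kubernetesVersion: v1.35.0
```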

Patch version updates

  • 1.34.3, 1.33.7, and 1.32.11 – New patch versions for Kubernetes 1.34, 1.33, and 1.32.
  • Platform version 1.2.0 (latest) – Use with these patch versions (includes etcd 3.5.24, required by upstream). Older platform versions (v1.1.0, v1.0.0, v0.1.0) are deprecated but available when Show deprecated platform versions is enabled for migration purposes.
  • Older Kubernetes patch versions (e.g., v1.34.1, v1.33.5, v1.32.9) remain visible when Show deprecated Kubernetes patch versions is enabled.
Kubernetes version   Platform version
1.35                 1.2.0
1.34.3               1.2.0
1.33.7               1.2.0
1.32.11              1.2.0

Default versions

By default, only the latest Kubernetes patch versions (v1.35.0, v1.34.3, v1.33.7, v1.32.11) and platform version v1.2.0 are shown in the cluster provisioning UI.


When deprecated versions are enabled

When Show deprecated Kubernetes patch versions or Show deprecated platform versions is enabled, older versions are listed for migration purposes. Using the latest patch versions is recommended.

Note

Currently, deprecated patch versions can be selected for new cluster creation (for each minor version). In the next release, users will also be able to select deprecated patch versions when upgrading clusters, to support migration.

Kubernetes versions with deprecated options


Platform versions with deprecated options


For the full Kubernetes and platform version support matrix, see Support Matrix.

Azure AKS

Node Auto-Provisioning Configuration

Node auto-provisioning configuration support is now available for Azure AKS clusters. This release includes support in:

  • Terraform AKS v3 cluster resource
  • rctl
  • GitOps

Info

UI support for node auto-provisioning configuration will be added in the next release.

The following example shows the cluster config structure for node auto-provisioning:

apiVersion: infra.k8smgmt.io/v3
kind: Cluster
metadata:
  name: <cluster-name>
  project: <project-name>
spec:
  blueprintConfig:
    name: minimal
  cloudCredentials: <cloud-credentials>
  type: aks
  config:
    kind: aksClusterConfig
    metadata:
      name: <cluster-name>
    spec:
      managedCluster:
        apiVersion: '2024-01-01'
        identity:
          type: SystemAssigned
        location: <region>
        properties:
          # Required for NAP (Node Auto-Provisioning)
          nodeProvisioningProfile:
            mode: Auto
            defaultNodePools: Auto
          apiServerAccessProfile:
            enablePrivateCluster: false
            enablePrivateClusterPublicFQDN: false
          autoUpgradeProfile:
            upgradeChannel: none
            nodeOsUpgradeChannel: None
          disableLocalAccounts: false
          dnsPrefix: <cluster-name>-dns
          enablePodSecurityPolicy: false
          enableRBAC: true
          kubernetesVersion: "1.33.x"
          networkProfile:
            dnsServiceIP: 10.0.0.10
            loadBalancerSku: standard
            networkPlugin: azure
            serviceCidr: 10.0.0.0/16
        sku:
          name: Base
          tier: Free
        type: Microsoft.ContainerService/managedClusters
      nodePools:
        - type: Microsoft.ContainerService/managedClusters/agentPools
          apiVersion: '2024-01-01'
          name: pool1
          properties:
            count: 1
            maxPods: 110
            mode: System
            orchestratorVersion: "1.33.x"
            osType: Linux
            type: VirtualMachineScaleSets
            vmSize: Standard_B4ms
            vnetSubnetID: <vnet-subnet-id>
      resourceGroupName: <resource-group-name>

The nodeProvisioningProfile with mode: Auto and defaultNodePools: Auto is the configuration required to enable NAP.
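Once NAP is enabled, Azure provisions nodes through Karpenter-style NodePool custom resources inside the cluster. As an illustrative sketch (the resource name, requirements, and limits below are assumptions for illustration, not part of the Rafay cluster spec), a minimal NodePool might look like:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default            # hypothetical name
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.azure.com
        kind: AKSNodeClass
        name: default
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
  limits:
    cpu: 100               # cap total CPU NAP may provision (illustrative)
```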

Limitations

The following are Azure NAP limitations as documented in the Azure AKS Node Auto-Provisioning overview:

  • NAP cannot be enabled on clusters with the autoscaler enabled.
  • Windows node pools are not supported.
  • IPv6 clusters are not supported.
  • DiskEncryptionSetId is not supported.
  • HTTP proxy configuration is not supported.
  • Cluster Stop is not supported.
  • Day-2 updates for egress outbound-type are not supported.
  • When using a custom vNet, loadBalancerSku must be set to Standard.

NAP and Autoscaling

NAP cannot be enabled when autoscaling is enabled on a nodepool. Rafay does not support disabling autoscaling during Day-2 operations. To use NAP, delete the existing nodepool and create a new one configured for NAP.

Kubernetes 1.34 Support

AKS clusters can now be provisioned with Kubernetes 1.34. Existing AKS clusters can be upgraded in-place to Kubernetes 1.34.

AKS cluster with Kubernetes 1.34

Google GKE

  • GKE clusters can now be provisioned with Kubernetes 1.34. Existing GKE clusters can be upgraded in-place to Kubernetes 1.34.
  • Kubernetes 1.31 has been removed as it has reached end of life (EOL).

The following bug fixes have been addressed in this release:

Bug ID Description
RC-47187 Fixed custom blueprints not appearing in project selection
RC-46894 Fixed rctl namespace creation error
RC-46906 Fixed addon overrides sharing issue after GitOps updates
RC-47118 Fixed AKS cluster conversion to managed when nodepool has empty kubeletConfig

v4.0 Update 3 - SaaS

17 Feb, 2026

Azure AKS

Cluster LCM

Enhanced configuration for Azure AKS cluster lifecycle management. The following configuration enhancements have been added across all supported interfaces, including the Terraform AKS v3 cluster resource:

  • Azure Web Application Routing addon – Managed Web Application Routing addon
  • Azure Istio service mesh addon – Managed Istio service mesh addon
  • Key Vault Secret Provider CSI Driver – Azure Key Vault Secret Provider CSI Driver
  • Custom kubelet config – Custom kubelet configuration
  • Proxy configuration – http_proxy, https_proxy, and no_proxy settings
  • Snapshot support – Image snapshot support
  • Network dataplane – Cilium dataplane
  • Network policy – Cilium network policy

Proxy configuration update

When the proxy configuration of an AKS cluster is updated, the changes are applied to the cluster specification but do not take effect on workloads immediately. After updating the proxy configuration, save and apply the updated AKS cluster configuration, then update (or re-publish) the associated Blueprint.

The following example shows the cluster config structure with these enhancements applied:

spec:
  clusterConfig:
    spec:
      managedCluster:
        properties:
          addonProfiles:
            azureKeyvaultSecretsProvider:          # Key Vault Secret Provider CSI Driver
              config:
                enableSecretRotation: "true"
                rotationPollInterval: 2m
              enabled: true
          httpProxyConfig:                         # Proxy configuration
            httpProxy: http://proxy.example.com:443/
            httpsProxy: http://proxy.example.com:443/
            noProxy:
              - 10.0.0.0/16
              - localhost
              - 127.0.0.1
          ingressProfile:                          # Web Application Routing addon
            webAppRouting:
              dnsZoneResourceIds:
                - /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/dnsZones/<zone>
          networkProfile:                          # Cilium dataplane and network policy
            networkDataplane: cilium
            networkPlugin: azure
            networkPolicy: cilium
          serviceMeshProfile:                      # Istio service mesh addon
            istio:
              components:
                ingressGateways:
                  - enabled: true
                    mode: Internal
              revisions:
                - asm-1-26
            mode: Istio
      nodePools:
        - name: primary
          properties:
            kubeletConfig:                        # Custom kubelet config
              containerLogMaxFiles: 2
        - name: snapshot-pool
          properties:
            creationData:                         # Snapshot support
              sourceResourceId: /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ContainerService/snapshots/<snapshot-name>

Ingress Profile – Key elements: Enable Ingress Profile, Enable Web App Routing, DNS Zone Resource IDs

Ingress Profile - Enable Web App Routing addon and DNS Zone configuration

Service Mesh Profile – Key elements: Enable Service Mesh Profile, Istio mode, Egress/Ingress Gateways, Key Vault certificate integration

Service Mesh Profile - Istio addon configuration

Node Pool Configuration – Key elements: Snapshot ID, Enable Kubelet Configuration, sysctls, container logging, CPU management, Image GC thresholds

Node pool - Snapshot support and custom kubelet configuration

Import and Take Over

Bug fixes have been applied to improve the reliability of the AKS cluster import and take over flow.

Environment Manager

Enhanced UX for JSON-based input variables when building self-service templates:

  • Template creation: Platform engineers get JSON and regex pattern validation when creating templates. Use the Validate JSON button and define patterns with patternExample in the JSON Schema.

Template creation - Validate JSON and JSON Schema with regex pattern

  • Environment launch: End users get validation and example help text when providing input values. If a value doesn't match the regex pattern, the UI shows the expected format (e.g., "Example: APMS-00000") to reduce errors and enable a self-sufficient workflow.

Template launch - Regex pattern help text for invalid values
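As a sketch of how the two features combine, an input variable schema pairing a regex pattern with the patternExample help text shown at launch might look like the following (the variable name and pattern are illustrative, not a documented schema):

```json
{
  "type": "object",
  "properties": {
    "ticket_id": {
      "type": "string",
      "pattern": "^APMS-[0-9]{5}$",
      "patternExample": "APMS-00000"
    }
  },
  "required": ["ticket_id"]
}
```

With this schema, a value that fails the pattern check would surface the "Example: APMS-00000" hint to the end user instead of a bare validation error.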

The following bug fixes have been addressed in this release:

Bug ID Description
RC-46741 Cluster overrides: Day 2 operations: Repository dropdown appears empty when pulling files, even when repositories exist in the project
RC-46065 Cluster overrides: Day 2 operations: Workloads not appearing in Resource Type dropdown when using Select from List
RC-46948 Cluster platform upgrade: Existing cronjobs were being removed during upgrade
RC-46857 Cluster platform upgrade: Upgrade failures on large clusters
RC-46813 Cluster upgrade: GitOps-created clusters with annotations fail with conflicting tasks during upgrade
RC-46858 GKE: Cluster spec gets deleted; manual pipeline trigger also fails
RC-46887 Pipeline: Pipeline runs fail with invalid argument error
RC-46886 CD Agent: Failed to fetch origin error

v4.0 Update 2 - SaaS

10 Feb, 2026

The following bug fixes have been addressed in this release:

Bug ID Description
RC-46705 Workload deploy status incorrectly shows pending when Ready state is success
RC-46775 Git repository CA certificate settings disappear after a system sync from Git to System
RC-46585 Blueprint sync remains in progress when Alert type taskSet fails to deploy
RC-46739 UI: Unable to view or download values file for Helm 3 workload templates (Helm Repo and Catalog)
RC-46485 Day 2: Consul member unable to rejoin cluster after node reboot

v4.0 Update 1 - SaaS

05 Feb, 2026

Azure AKS

Import and Convert to Managed

The following enhancements for AKS Import and AKS Convert to Managed are included in this release:

  • Private DNS zone – Support for private DNS zone configuration
  • Proxy configuration – http_proxy, https_proxy, and no_proxy settings
  • Node image – Ubuntu as the node image option
  • Custom kubelet config – Custom kubelet configuration support
  • Azure Web Application Routing addon – Managed Web Application Routing addon
  • Azure Istio service mesh addon – Managed Istio service mesh addon
  • Key Vault Secret Provider CSI Driver – Azure Key Vault Secret Provider CSI Driver support
  • Snapshot ID – Image snapshot ID support

For more details and an example configuration, refer to New Enhancements for Import and Convert to Managed.

Bug Fix

The following bug fix has been addressed in this release:

Bug ID Description
RC-46557 EKS: Control plane upgrade fails via GitOps due to conflicting provision tasks