
Building Env Templates

This section offers examples for designing and developing environment templates using the Rafay Environment Manager framework. These examples showcase the framework's capabilities, enabling users to create complex environments tailored to the organization's workflow requirements.


Example Templates

Building a Single Resource Template

We will start by building an environment template that creates a VPC in an AWS region. This approach leverages Rafay’s GitOps System Sync feature to load all the templates from Git into the Rafay controller.

Prerequisites for the GitOps-first approach to developing and testing the template:

  1. Access to the Rafay Console through an Org
  2. A User API key/token if you use APIs for repository, pipeline and agent creation
  3. Creation of the following (a one-time exercise) using one of Rafay’s interfaces:

     • An Agent instance to run the code backing the resource templates

       Refer to Rafay RCTL documentation for agent creation and management

     • A repository named envmgr in the Rafay console, configured with your GitHub repository and branch details and your GitHub account’s username and token

       Refer to Rafay RCTL documentation for repository creation and management

     • A secret

     • A GitOps pipeline instance in the organization -> project for loading entities from Git into the system

       Refer to Rafay RCTL documentation for pipeline creation and management

In this example, all of the building blocks, including templates, config context and agents, will be part of the “dev-project” project.

We will create the folder structure below in the Git repository, in the branch dev-guide-templates.

Code structure in repository “envmgr”

├── rafay-resources
│   ├── projects
│   │   ├── dev-project
│   │   │   ├── configcontexts
│   │   │   │   ├── artifacts/rafay-config-context/sealed-secret.yaml
│   │   │   │   ├── RafayConfigContext.yaml
│   │   │   ├── environmenttemplates
│   │   │   │   ├── EnvironmentTemplate.yaml
│   │   │   ├── resourcetemplates
│   │   │   │   ├── VPCResourceTemplate.yaml
├── terraform
│   ├── vpc
│   │   ├── main.tf
│   │   ├── output.tf
│   │   ├── provider.tf
│   │   ├── variable.tf
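
If you are building this layout from scratch, it can be scaffolded locally with standard shell tooling before adding the individual files (a convenience sketch; adjust paths to your repository checkout):

mkdir -p rafay-resources/projects/dev-project/configcontexts/artifacts/rafay-config-context
mkdir -p rafay-resources/projects/dev-project/environmenttemplates
mkdir -p rafay-resources/projects/dev-project/resourcetemplates
mkdir -p terraform/vpc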

Building Config Context with secrets data

  • Install the Kubeseal utility on your system
  • Download the Secret Sealer Certificate
  • Create a secret YAML file (containing the sensitive data to be encrypted) like the one below
apiVersion: v1
data:
 RCTL_API_KEY: <base64 encoded value of RCTL_API_KEY>
 aws_access_key_id: <base64 encoded value of aws access key>
 aws_secret_access_key: <base64 encoded value of aws secret key>
kind: Secret
metadata:
 name: rafay-config-context
 namespace: default
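
The base64-encoded values above can be produced with the standard base64 utility (shown here with placeholder values; the -n flag prevents a trailing newline from being encoded):

echo -n '<your RCTL API key>' | base64
echo -n '<your aws access key id>' | base64
echo -n '<your aws secret access key>' | base64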
  • Run the command to generate encrypted data
kubeseal --cert CERT --format yaml < secret.yaml > sealsecret.yaml
  **CERT:** Certificate downloaded from the Secret Sealer  
  **secret.yaml:** Kubernetes secret file  
  **sealsecret.yaml:** The generated output file name with encrypted values
  • On successful execution, the generated output file sealsecret.yaml contains the encrypted (sealed) data

Below is the generated output file, which needs to be copied to
rafay-resources/projects/dev-project/configcontexts/artifacts/rafay-config-context/sealed-secret.yaml

apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: rafay-config-context
  namespace: default
spec:
  encryptedData:
    RCTL_API_KEY: <encrypted data of RCTL_API_KEY>
    aws_access_key_id: <encrypted data of aws_access_key_id>
    aws_secret_access_key: <encrypted data of aws_secret_access_key>
  template:
    data: null
    metadata:
      creationTimestamp: null
      name: rafay-config-context
      namespace: default

rafay-resources/projects/dev-project/configcontexts/RafayConfigContext.yaml

apiVersion: eaas.envmgmt.io/v1
kind: ConfigContext
metadata:
  name: rafay-config-context
  project: dev-project
spec:
  envs:
   # Here we are defining the RCTL config in config context, to be used in multiple environment templates
    - key: RCTL_API_KEY
      options:
        sensitive: true
      value: sealed://RCTL_API_KEY
      valueType: text
    - key: RCTL_REST_ENDPOINT
      value: console.rafay.dev
      valueType: text
    - key: RCTL_PROJECT
      value: dev-project
      valueType: text
  variables:
     # Here we are defining the AWS credentials in config context, to be used in multiple environment templates
    - name: aws_access_key_id
      valueType: text
      options:
        sensitive: true
        override:
          type: allowed
      value: sealed://aws_access_key_id
    - name: aws_secret_access_key
      valueType: text
      options:
        sensitive: true
        override:
          type: allowed
      value: sealed://aws_secret_access_key
    - name: aws_region
      valueType: text
      value: us-west-2
      options:
        override:
          type: allowed
  secret:
    name: file://artifacts/rafay-config-context/sealed-secret.yaml

rafay-resources/projects/dev-project/environmenttemplates/EnvironmentTemplate.yaml

apiVersion: eaas.envmgmt.io/v1
kind: EnvironmentTemplate
metadata:
  name: app-environment-template
  project: dev-project
spec:
  # Here we associate an agent that will run the code of all the resource templates involved, in this case VPCResourceTemplate
  agents:
    - name: dev-agent
  contexts:
    # Here we associate the config context that holds the AWS credentials. A config context is used so that the same credentials can be shared across multiple environment templates
    - name: rafay-config-context
  resources:
  # Here we are associating first resource template VPCResourceTemplate and its version.
  - kind: resourcetemplate
    name: vpc-resource-template
    resourceOptions:
      version: v1
    type: dynamic
  version: v1
  versionState: active

rafay-resources/projects/dev-project/resourcetemplates/VPCResourceTemplate.yaml

apiVersion: eaas.envmgmt.io/v1
kind: ResourceTemplate
metadata:
  name: vpc-resource-template
  project: dev-project
spec:
  # Here we define opentofu as the provider; if no providerOptions is specified, Rafay uses the latest version
  provider: opentofu
  providerOptions:
    openTofu:
      refresh: true
      backendType: system
  repositoryOptions:
    # Here we associate the code in the repository with the resource template
    name: envmgr
    branch: dev-guide-templates
    directoryPath: development-guide-templates/terraform/vpc
  variables:
    - name: vpc_name
      valueType: text
      value: example-vpc
      options:
        description: VPC name
        override:
          type: allowed
    - name: vpc_cidr
      valueType: text
      value: 10.0.0.0/16
      options:
        description: CIDR block
        override:
          type: allowed
  version: v1

terraform/vpc/main.tf

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = var.vpc_name
  cidr = var.vpc_cidr

  azs             = ["${var.aws_region}a", "${var.aws_region}b", "${var.aws_region}c"]
  private_subnets = [cidrsubnet(var.vpc_cidr, 8, 1), cidrsubnet(var.vpc_cidr, 8, 2), cidrsubnet(var.vpc_cidr, 8, 3)]
  public_subnets  = [cidrsubnet(var.vpc_cidr, 8, 100), cidrsubnet(var.vpc_cidr, 8, 101), cidrsubnet(var.vpc_cidr, 8, 102)]

  enable_nat_gateway     = true
  single_nat_gateway     = true
  one_nat_gateway_per_az = false

  tags = var.tags
}
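
The cidrsubnet calls above carve /24 subnets out of the VPC CIDR. If you want to verify the resulting ranges before launching an environment, the OpenTofu (or Terraform) console can evaluate them locally, for example:

$ tofu console
> cidrsubnet("10.0.0.0/16", 8, 1)
"10.0.1.0/24"
> cidrsubnet("10.0.0.0/16", 8, 100)
"10.0.100.0/24"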

terraform/vpc/output.tf

output "vpc_name" {
  value = module.vpc.name
}

output "vpc_id" {
  value = module.vpc.vpc_id
}

output "private_subnets" {
  value = module.vpc.private_subnets
}

output "public_subnets" {
  value = module.vpc.public_subnets
}

terraform/vpc/provider.tf

provider "aws" {
  region     = var.aws_region
  access_key = var.aws_access_key_id
  secret_key = var.aws_secret_access_key
}

terraform/vpc/variable.tf

variable "vpc_name" {
  description = "VPC name"
  default     = "example-vpc"
  type        = string
}

variable "vpc_cidr" {
  description = "CIDR block"
  type        = string
  default     = "10.0.0.0/16"
}

variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-west-2"
}

variable "tags" {
  description = "AWS Tags"
  type        = map(string)
  default = {
    "env"   = "qa"
    "email" = "test@rafay.co"
  }
}

variable "aws_access_key_id" {
  description = "aws access key id"
  default     = ""
  sensitive   = true
}

variable "aws_secret_access_key" {
  description = "aws secret key"
  default     = ""
  sensitive   = true
}
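
Before wiring this code into the GitOps pipeline, you can optionally sanity-check the module on a workstation. The sketch below assumes the OpenTofu CLI is installed and that valid AWS credentials are passed through the standard TF_VAR_ environment-variable convention (placeholder values shown):

cd terraform/vpc
export TF_VAR_aws_access_key_id='<your access key>'
export TF_VAR_aws_secret_access_key='<your secret key>'
tofu init
tofu validate
tofu plan -var vpc_name=example-vpc -var vpc_cidr=10.0.0.0/16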

Loading and launching the environment

  1. Run the pipeline through the UI or RCTL to sync the templates from the GitHub repo to the project in your organization
  2. You will see all of the above entities (env template, resource template and the config context) in the dev-project
  3. You can select the EnvironmentTemplate and launch an instance of the environment by providing the region info
  4. The output of the VPC created as part of this run will show up in the activity panel in the UI

Congratulations! With this, you have successfully created a template for creating a VPC.


Building a Multi-Resource Template with context variables and output

We will now evolve the above template to also:

  1. Provision EKS cluster onto the created VPC
  2. Provision a namespace onto the EKS cluster

We will now add the following resource templates:

  • EKS resource template
  • Namespace resource template

Code structure in repository with the new templates

├── rafay-resources
│   ├── projects
│   │   ├── dev-project
│   │   │   ├── configcontexts
│   │   │   │   ├── artifacts/rafay-config-context/sealed-secret.yaml
│   │   │   │   ├── RafayConfigContext.yaml
│   │   │   ├── environmenttemplates
│   │   │   │   ├── EnvironmentTemplate.yaml
│   │   │   ├── resourcetemplates
│   │   │   │   ├── VPCResourceTemplate.yaml
│   │   │   │   ├── EKSResourceTemplate.yaml
│   │   │   │   ├── NamespaceResourceTemplate.yaml
├── terraform
│   ├── vpc
│   │   ├── main.tf
│   │   ├── output.tf
│   │   ├── provider.tf
│   │   ├── variable.tf
│   ├── eks
│   │   ├── main.tf
│   │   ├── output.tf
│   │   ├── provider.tf
│   │   ├── variable.tf
│   ├── namespace
│   │   ├── main.tf
│   │   ├── output.tf
│   │   ├── provider.tf
│   │   ├── variable.tf

rafay-resources/projects/dev-project/environmenttemplates/EnvironmentTemplate.yaml

apiVersion: eaas.envmgmt.io/v1
kind: EnvironmentTemplate
metadata:
  name: app-environment-template
  project: dev-project
spec:
  # Here we associate an agent that will run the code of all the resource templates involved
  agents:
    - name: dev-agent
  contexts:
    # Here we associate the config context that holds the AWS credentials. A config context is used so that the same credentials can be shared across multiple environment templates
    - name: rafay-config-context
  resources:
  # Here we are associating first resource template VPCResourceTemplate and its version.
  - kind: resourcetemplate
    name: vpc-resource-template
    resourceOptions:
      version: v1
    type: dynamic
  # Here we are associating second resource template EKSResourceTemplate and its version with the dependency on VPCResourceTemplate.
  - kind: resourcetemplate
    name: eks-resource-template
    resourceOptions:
      version: v1
    type: dynamic
    dependsOn:
      - name: vpc-resource-template
  # Here we are associating third resource template NamespaceResourceTemplate and its version with the dependency on EKSResourceTemplate.
  - kind: resourcetemplate
    name: namespace-resource-template
    resourceOptions:
      version: v1
    type: dynamic
    dependsOn:
      - name: eks-resource-template
  variables:
    # Here we define an input variable to collect the cluster name from the end user of the template.
    # Its override type is set to allowed, so that the end user can provide a cluster name (free-form text) at environment launch time
    # It is marked required, so that it shows up as a mandatory field in the UI
    # A selector is used here to wire the collected cluster name to the EKS resource template's variable. This way, the platform team can pre-set and restrict values in one place, the environment template, before it is shared with end developers/users
    - name: cluster_name
      valueType: text
      options:
        required: true
        override:
          type: allowed
  version: v2
  versionState: active

Note: Input variables defined at the environment template level are passed to the corresponding input variables of the resource templates

rafay-resources/projects/dev-project/resourcetemplates/EKSResourceTemplate.yaml

apiVersion: eaas.envmgmt.io/v1
kind: ResourceTemplate
metadata:
  name: eks-resource-template
  project: dev-project
spec:
  # Here we define opentofu as the provider; if no providerOptions is specified, Rafay uses the latest version
  provider: opentofu
  providerOptions:
    openTofu:
      refresh: true
      backendType: system
  repositoryOptions:
    # Here we associate the code in the repository with the resource template
    branch: dev-guide-templates
    directoryPath: development-guide-templates/terraform/eks
    name: envmgr
  variables:
    - name: cluster_name
      valueType: text
      value: EKSClusterName
      options:
        override:
          type: allowed
    # Here we use the output of the VPCResourceTemplate's public_subnets for eks_public_subnets.
    - name: eks_public_subnets
      valueType: expression
      value: $(resource."vpc-resource-template".output.public_subnets.value)$
      options:
        override:
          type: notallowed
    # Here we use the output of the VPCResourceTemplate's private_subnets for eks_private_subnets.
    - name: eks_private_subnets
      valueType: expression
      value: $(resource."vpc-resource-template".output.private_subnets.value)$
      options:
        override:
          type: notallowed
    - name: project
      valueType: text
      value: dev-project
      options:
        override:
          type: allowed
  version: v1

rafay-resources/projects/dev-project/resourcetemplates/NamespaceResourceTemplate.yaml

apiVersion: eaas.envmgmt.io/v1
kind: ResourceTemplate
metadata:
  name: namespace-resource-template
  project: dev-project
spec:
  # Here we define opentofu as the provider; if no providerOptions is specified, Rafay uses the latest version
  provider: opentofu
  providerOptions:
    openTofu:
      refresh: true
      backendType: system
  # Here we associate the code in the repository with the resource template
  repositoryOptions:
    branch: dev-guide-templates
    directoryPath: development-guide-templates/terraform/namespace
    name: envmgr
  variables:
    # Here we specify the project name.
    - name: project
      valueType: text
      value: dev-project
      options:
        override:
          type: allowed
    # Here we use the output of the EKSResourceTemplate's eks_cluster_name as the cluster on which the namespace should be created
    - name: target_cluster_name
      valueType: expression
      value: $(resource."eks-resource-template".output.eks_cluster_name.value)$
      options:
        required: true
        override:
          type: notallowed
  version: v1

Note: The input variable target_cluster_name in the namespace resource template above is chained to the value of the EKS resource template’s output variable eks_cluster_name

terraform/eks/main.tf

resource "rafay_cloud_credential" "aws_creds" {
  name         = var.aws_cloud_provider_name
  project      = var.project
  description  = "description"
  type         = "cluster-provisioning"
  providertype = "AWS"
  awscredtype  = "accesskey"
  accesskey    = var.aws_access_key_id
  secretkey    = var.aws_secret_access_key
}

resource "rafay_eks_cluster" "ekscluster-basic" {
  cluster {
    kind = "Cluster"
    metadata {
      name    = var.cluster_name
      project = var.project
    }
    spec {
      type           = "eks"
      blueprint      = "default"
      cloud_provider = rafay_cloud_credential.aws_creds.name
      cni_provider   = "aws-cni"
      proxy_config   = {}
    }
  }
  cluster_config {
    apiversion = "rafay.io/v1alpha5"
    kind       = "ClusterConfig"
    metadata {
      name    = var.cluster_name
      region  = var.aws_region
      version = var.eks_cluster_version
      tags    = var.tags
    }
    vpc {
      subnets {
        dynamic "private" {
          for_each = var.eks_private_subnets

          content {
            name = private.value
            id   = private.value
          }
        }
        dynamic "public" {
          for_each = var.eks_public_subnets

          content {
            name = public.value
            id   = public.value
          }
        }
      }
      cluster_endpoints {
        private_access = true
        public_access  = var.eks_cluster_public_access
      }
    }
    node_groups {
      name       = "ng-1"
      ami_family = "AmazonLinux2"
      iam {
        iam_node_group_with_addon_policies {
          image_builder = true
          auto_scaler   = true
        }
      }
      instance_type      = var.eks_cluster_node_instance_type
      desired_capacity   = 1
      min_size           = 1
      max_size           = 2
      max_pods_per_node  = 50
      version            = var.eks_cluster_version
      volume_size        = 80
      volume_type        = "gp3"
      private_networking = true
      subnets            = var.eks_private_subnets
      labels = {
        app       = "infra"
        dedicated = "true"
      }
      tags = var.tags
    }
  }
}

terraform/eks/output.tf

output "eks_cluster_name" {
  value = rafay_eks_cluster.ekscluster-basic.cluster[0].metadata[0].name
}

terraform/eks/provider.tf

terraform {
  backend "local" {}
  required_providers {
    rafay = {
      version = ">=1.1.37"
      source  = "RafaySystems/rafay"
    }
  }
  required_version = ">= 1.4.4"
}

provider "aws" {
  region     = var.aws_region
  access_key = var.aws_access_key_id
  secret_key = var.aws_secret_access_key
}

terraform/eks/variable.tf

variable "aws_region" {
  description = "Configuring AWS as provider"
  type        = string
  default     = "us-west-2"
}

variable "aws_cloud_provider_name" {
  description = "cloud credentials name"
  default     = "aws-cloud-creds-1"
}

variable "aws_access_key_id" {
  description = "aws access key"
  default     = ""
  sensitive   = true
}

variable "aws_secret_access_key" {
  description = "aws secret key"
  default     = ""
  sensitive   = true
}

variable "cluster_name" {
  description = "name of the eks cluster"
  default     = "eks-cluster-1"
}

variable "project" {
  description = "name of the project"
  default     = "dev-project"
}

variable "eks_cluster_version" {
  description = "eks cluster version"
  default     = "1.26"
}

variable "eks_cluster_node_instance_type" {
  description = "node instance type"
  default     = "t3.large"
}

variable "eks_cluster_public_access" {
  description = "public access"
  default     = true
}

variable "eks_public_subnets" {
  description = "public subnets"
}

variable "eks_private_subnets" {
  description = "private subnets"
}

variable "tags" {
  description = "AWS Tags"
  type        = map(string)
  default = {
    "env"   = "qa"
    "email" = "test@rafay.co"
  }
}
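
At environment run time, the eks_public_subnets and eks_private_subnets values are injected from the VPC resource template's outputs through the chained expressions shown earlier; to exercise this module outside the platform you would have to supply them yourself. Below is a minimal local sketch, assuming the OpenTofu CLI is installed and that the Rafay provider picks up the RCTL_API_KEY / RCTL_REST_ENDPOINT environment variables the config context normally supplies to the agent (all values are placeholders):

cd terraform/eks
export RCTL_API_KEY='<your rctl api key>'
export RCTL_REST_ENDPOINT='console.rafay.dev'
export TF_VAR_aws_access_key_id='<your access key>'
export TF_VAR_aws_secret_access_key='<your secret key>'
cat > local.auto.tfvars <<'EOF'
cluster_name        = "eks-cluster-1"
project             = "dev-project"
eks_private_subnets = ["subnet-aaaa1111", "subnet-bbbb2222"]
eks_public_subnets  = ["subnet-cccc3333", "subnet-dddd4444"]
EOF
tofu init
tofu plan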

terraform/namespace/main.tf

resource "random_id" "rnd" {
  keepers = {
    first = "${timestamp()}"
  }
  byte_length = 4
}

locals {
  # Create a unique namespace name; the timestamp() keeper above forces a new random suffix on every apply
  namespace = "${var.project}-${random_id.rnd.dec}"
}

resource "rafay_namespace" "namespace" {
  metadata {
    name    = local.namespace
    project = var.project
  }
  spec {
    drift {
      enabled = true
    }
    placement {
      labels {
        key   = "rafay.dev/clusterName"
        value = var.target_cluster_name
      }
    }
    resource_quotas {
      config_maps              = "10"
      cpu_limits               = "4000m"
      memory_limits            = "4096Mi"
      cpu_requests             = "2000m"
      memory_requests          = "2048Mi"
      persistent_volume_claims = "2"
      pods                     = "30"
      replication_controllers  = "5"
      services                 = "10"
      services_load_balancers  = "10"
      services_node_ports      = "10"
      storage_requests         = "1Gi"
    }
  }
}

terraform/namespace/output.tf

output "namepsace" {
  value = local.namespace
}

terraform/namespace/provider.tf

terraform {
  backend "local" {}
  required_providers {
    rafay = {
      version = ">=1.1.37"
      source  = "RafaySystems/rafay"
    }
  }
  required_version = ">= 1.4.4"
}

terraform/namespace/variable.tf

variable "target_cluster_name" {
  description = "name of the eks cluster"
  default     = "eks-cluster-1"
}

variable "project" {
  description = "name of the project where the cluster resides"
  type        = string
  default     = "eaas"
}
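
Once the Terraform code and the resource template manifests are in place, commit and push them to the branch that the envmgr repository definition points at, so the GitOps pipeline can pick them up (a plain-git sketch):

git checkout dev-guide-templates
git add rafay-resources terraform
git commit -m 'Add EKS and namespace resource templates'
git push origin dev-guide-templates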

Loading and launching the environment

  1. Run the pipeline through the UI or RCTL to sync the templates from the GitHub repo to the project in your organization
  2. You will see all of the above entities (env template, resource templates and the config context) in the dev-project
  3. You can select version v2 of the EnvironmentTemplate and launch an instance of the environment by providing the input information
  4. The output of the run will be:
     • VPC creation
     • EKS cluster provisioned onto the VPC (the EKS cluster can be seen in the clusters list in the project)
     • Namespace created on the EKS cluster (output displayed in the activity panel)

Congratulations! With this, you have successfully created a template for creating a VPC, an EKS cluster and a namespace.


Building a Template with a Schedule using a Container Driver

Container Driver in file “AppResizeDriver.yaml”

apiVersion: eaas.envmgmt.io/v1
kind: Driver
metadata:
  name: appresize
  project: dev-project
spec:
  config:
    type: container
    timeoutSeconds: 600
    container:
      image: 'registry.dev.rafay-edge.net/rafay/resize:0.2'
      # Run the resize logic in dry-run mode, for all namespaces on the cluster, to recommend sizing adjustments in the output
      arguments:
        - /opt/resize.py
        - '--dry-run'
        - '--all-namespaces'
      commands:
        - python3
      workingDirPath: /tmp
      cpuLimitMilli: '500'
      memoryLimitMb: '1024'
status: {}
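
For reference, the driver configuration above is roughly equivalent to running the container manually as follows (a sketch only; it assumes you have pull access to the registry, and the script will still need cluster credentials to report anything useful):

docker run --rm -w /tmp registry.dev.rafay-edge.net/rafay/resize:0.2 \
  python3 /opt/resize.py --dry-run --all-namespaces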

We will now enhance our EnvironmentTemplate and add “Scheduled runs” of “appresize” driver logic onto the cluster for sizing the deployments. “Appresize” looks at utilization metrics of pods running on the cluster and suggests appropriate resource requests/limits to rightsize applications (and reduce infrastructure spend).

apiVersion: eaas.envmgmt.io/v1
kind: EnvironmentTemplate
metadata:
  name: app-environment-template
  project: dev-project
spec:
  # Here we associate an agent that will run the code of all the resource templates involved
  agents:
    - name: dev-agent
  contexts:
    # Here we associate the config context that holds the AWS credentials. A config context is used so that the same credentials can be shared across multiple environment templates
    - name: rafay-config-context
  resources:
  # Here we are associating first resource template VPCResourceTemplate and its version.
  - kind: resourcetemplate
    name: vpc-resource-template
    resourceOptions:
      version: v1
    type: dynamic
  # Here we are associating second resource template EKSResourceTemplate and its version with the dependency on VPCResourceTemplate.
  - kind: resourcetemplate
    name: eks-resource-template
    resourceOptions:
      version: v1
    type: dynamic
    dependsOn:
      - name: vpc-resource-template
  # Here we are associating third resource template NamespaceResourceTemplate and its version with the dependency on EKSResourceTemplate.
  - kind: resourcetemplate
    name: namespace-resource-template
    resourceOptions:
      version: v1
    type: dynamic
    dependsOn:
      - name: eks-resource-template
  variables:
    # Here we define an input variable to collect the cluster name from the end user of the template.
    # Its override type is set to allowed, so that the end user can provide a cluster name (free-form text) at environment launch time
    # It is marked required, so that it shows up as a mandatory field in the UI
    # A selector is used here to wire the collected cluster name to the EKS resource template's variable. This way, the platform team can pre-set and restrict values in one place, the environment template, before it is shared with end developers/users
    - name: cluster_name
      valueType: text
      options:
        required: true
        override:
          type: allowed
# Here we are setting the schedule to run appresize nightly
  schedules:
    - name: 'daily-schedule'
      type: workflows
      cadence:
        cronExpression: 59 23 * * *
        cronTimezone: America/Los_Angeles
      workflows:
        tasks:
          - name: AppResizeRun
            type: driver
            options: {}
            agents:
              - name: dev-agent
            onFailure: unspecified
            driver:
              name: appresize
      optOutOptions:
        allowOptOut: false
  version: v3
  versionState: active

Loading and launching the environment

  1. Run the pipeline through the UI or RCTL to sync the templates from the GitHub repo to the project in your organization
  2. You will see all of the above entities (env template, resource templates and the config context) in the dev-project
  3. You can select version v3 of the EnvironmentTemplate and launch an instance of the environment by providing the input information
  4. The output of the run will be:
     • VPC creation
     • EKS cluster provisioned onto the VPC (the EKS cluster can be seen in the clusters list in the project)
     • Namespace created on the EKS cluster (output displayed in the activity panel)
     • A scheduled run of appresize at 11:59 PM daily

Congratulations! With this, you have successfully created a self-service template for creating a VPC, an EKS cluster, a namespace and a scheduled job run for application rightsizing.


Building a Template with Pre/Post Hooks using function driver

We will now evolve our environment template to have an approval hook as a pre-hook for environment provisioning, i.e., environment deployment will only proceed after an explicit approval.

In this example, we will create a Jira approval workflow by creating a function driver and attaching it to the environment template with an onInit hook.

The JiraApproval logic in the workflow creates a Jira ticket and waits for a specific Jira state. After the Jira ticket is approved, the infrastructure resources are created. If the approval is denied, the infrastructure resources are not created.

Function Driver in file “JiraApproval.yaml”

apiVersion: eaas.envmgmt.io/v1
kind: Driver
metadata:
  name: system-jira-v01
  project: catalog-templates
spec:
  inputs:
    - name: inputvars
      data:
        variables:
          - name: debug
            value: "False"
            valueType: TEXT
            options:
              description: "Enables the verbose in the logs"
              override:
                type: allowed
          # jira connection parameters
          - name: jira_fqdn
            value: <jira fqdn>
            valueType: TEXT
            options:
              description: "Jira FQDN"
              override:
                type: allowed
          - name: jira_api_user
            value: <api user>
            valueType: TEXT
            options:
              description: "Jira API User"
              override:
                type: allowed
          - name: jira_api_token
            value: <api token>
            valueType: TEXT
            options:
              description: "Jira API Key"
              override:
                type: allowed
          - name: jira_approved_state
            value: "Approved"
            valueType: TEXT
            options:
              description: "Jira ticket state for approval"
              override:
                type: allowed
          - name: jira_denied_state
            value: "Denied"
            valueType: TEXT
            options:
              description: "Jira ticket state for denied"
              override:
                type: allowed
          - name: jira_project
            value: "EM"
            valueType: TEXT
            options:
              description: "Jira project name"
              override:
                type: allowed
          # Jira ticket parameters
          - name: short_description
            value: "Jira ticket from Approval"
            valueType: TEXT
            options:
              description: "Jira ticket text"
              override:
                type: allowed
          - name: description
            value: "Jira ticket from Approval description"
            valueType: TEXT
            options:
              description: "Jira ticket description"
              override:
                type: allowed
          - name: assignee
            value: <api user>
            valueType: TEXT
            options:
              description: "Jira ticket state for denied"
              override:
                type: allowed
          - name: priority
            value: "Low"
            valueType: TEXT
            options:
              description: "Jira ticket priority"
              override:
                type: allowed
  config:
    type: function
    timeoutSeconds: 300
    pollingConfig:
      repeat: 15s
      until: 1h
    function:
      cpuLimitMilli: "50"
      memoryLimitMi: "128"
      language: python
      languageVersion: "3.6"
      maxConcurrency: 10
      numReplicas: 1
      source: |
        from typing import *
        import requests
        from requests.auth import HTTPBasicAuth
        import json
        from logging import Logger
        from python_sdk_rafay_workflow import sdk
        import re
        from requests.adapters import HTTPAdapter

        headers = {
            'Content-Type': 'application/json',
            'Accept': 'application/json'
        }

        class Config:
            ### generic
            debug: str  # true or false

            ### jira connection configuration
            jira_fqdn: str
            jira_api_user: str
            jira_api_token: str
            jira_approved_state: str
            jira_denied_state: str
            jira_project: str

            ### jira ticket configuration
            short_description: str
            description: str
            assignee: str
            priority: str

            ## internal
            assignee_id: str
            jira_id: str

        def validate_inputs(request: Dict[str, Any]) -> None:
            # Check jira_fqdn
            if not request.get('jira_fqdn'):
                raise ValueError("jira_fqdn can not be empty.")
            if not re.match(r'^[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$', request['jira_fqdn']):
                raise ValueError("jira_fqdn must be a valid domain format.")

            # Check jira_api_user
            if not request.get('jira_api_user'):
                raise ValueError("jira_api_user can not be empty.")
            if not re.match(r'^[\w\.-]+@[\w\.-]+\.\w+$', request['jira_api_user']):
                raise ValueError("jira_api_user must be a valid email format.")

            # Check jira_api_token
            if not request.get('jira_api_token'):
                raise ValueError("jira_api_token can not be empty.")

            # Check jira_project
            if not request.get('jira_project'):
                raise ValueError("jira_project can not be empty.")

            # Check assignee
            if not request.get('assignee'):
                raise ValueError("assignee can not be empty.")
            if not re.match(r'^[\w\.-]+@[\w\.-]+\.\w+$', request['assignee']):
                raise ValueError("assignee must be a valid email format.")

            # Check priority
            priority = request.get('priority')
            if not priority:
                raise ValueError("priority can not be empty.")


        def create_issue(logger: Logger, conf: Config):
            url = "https://" + conf.jira_fqdn + "/rest/api/3/issue"
            auth = HTTPBasicAuth(conf.jira_api_user, conf.jira_api_token)

            headers = {
                "Accept": "application/json",
                "Content-Type": "application/json"
            }

            payload = json.dumps({
                "fields": {
                    "assignee": {
                        "accountId": conf.assignee_id
                    },
                    "description": {
                        "type": "doc",
                        "version": 1,
                        "content": [
                            {
                                "type": "paragraph",
                                "content": [
                                    {
                                        "type": "text",
                                        "text": conf.short_description,
                                    }
                                ],
                            }
                        ],
                    },
                    "issuetype": {
                        "name": "Task"
                    },
                    "project": {
                        "key": conf.jira_project
                    },
                    "priority": {
                        "name": conf.priority
                    },
                    "summary": conf.description,
                },
            })

            if conf.debug.lower() == "true":
                logger.debug("Request to the server for Jira ticket creation")
                logger.debug(payload)

            response = requests.request(
                "POST",
                url,
                data=payload,
                headers=headers,
                auth=auth
            )

            if conf.debug.lower() == "true":
                logger.debug(f"Response from the server for the Jira ticket {conf.jira_id}")
                logger.debug(json.dumps(json.loads(response.text), sort_keys=True, indent=4, separators=(",", ": ")))

            if response.status_code == 201:
                logger.info("Jira ticket created successfully")
                return response.json()
            else:
                logger.error(f"Failed to create Jira ticket: {response.status_code}, {response.text}")
                return None

        def get_user(logger: Logger, conf: Config, user: str):
            url = "https://" + conf.jira_fqdn + "/rest/api/3/user/search"
            auth = HTTPBasicAuth(conf.jira_api_user, conf.jira_api_token)
            headers = {
                "Accept": "application/json"
            }

            query = {
                'query': user
            }
            response = requests.request(
                "GET",
                url,
                headers=headers,
                params=query,
                auth=auth
            )

            if conf.debug.lower() == "true":
                logger.debug(f"Response from the server for the user: {user}")
                logger.debug(json.dumps(json.loads(response.text), sort_keys=True, indent=4, separators=(",", ": ")))

            return json.loads(response.text)[0]['accountId']

        def get_status(logger: Logger, conf: Config):
            url = "https://" + conf.jira_fqdn + "/rest/api/3/issue/" + conf.jira_id
            status = "Wait"
            auth = HTTPBasicAuth(conf.jira_api_user, conf.jira_api_token)
            headers = {
                "Accept": "application/json"
            }
            response = requests.request(
                "GET",
                url,
                headers=headers,
                auth=auth
            )

            if conf.debug.lower() == "true":
                logger.debug(f"Response from the server for the ticket status: {conf.jira_id}")
                logger.debug(json.dumps(json.loads(response.text), sort_keys=True, indent=4, separators=(",", ": ")))

            status = json.loads(response.text)['fields']['status']['name']
            return status

        def handle(logger: Logger, request: Dict[str, Any]) -> Dict[str, Any]:
            try:
                conf = Config()
                conf.debug = request['debug'] if 'debug' in request else ''
                conf.jira_fqdn = request['jira_fqdn'] if 'jira_fqdn' in request else ''
                conf.jira_api_user = request['jira_api_user'] if 'jira_api_user' in request else ''
                conf.jira_api_token = request['jira_api_token'] if 'jira_api_token' in request else ''
                conf.jira_approved_state = request['jira_approved_state'] if 'jira_approved_state' in request else ''
                conf.jira_denied_state = request['jira_denied_state'] if 'jira_denied_state' in request else ''

                conf.short_description = request['short_description'] if 'short_description' in request else ''
                conf.description = request['description'] if 'description' in request else ''
                conf.assignee = request['assignee'] if 'assignee' in request else ''
                conf.jira_project = request['jira_project'] if 'jira_project' in request else ''
                conf.priority = request['priority'] if 'priority' in request else ''

                # Validate all required fields
                validate_inputs(request)

                logger.info("Checking if ticket exists")
                counter = request['previous'].get('counter', 0) if 'previous' in request else 0
                id = request['previous'].get('ticket_id', '') if 'previous' in request else ''
                key = request['previous'].get('ticket_key', '') if 'previous' in request else ''
                conf.jira_id = id

                if id:
                    status = get_status(logger, conf)
                else:
                    logger.info("Creating ticket")
                    accountId = get_user(logger, conf, conf.assignee)
                    conf.assignee_id = accountId

                    ticket = create_issue(logger, conf)

                    id = ticket['id']
                    key = ticket['key']
                    conf.jira_id = id
                    logger.info(f"Ticket created {key}")
                    status = get_status(logger, conf)
            except ConnectionError as e:
                logger.error(f"Failed to connect to the Jira server: {str(e)}")
                raise sdk.TransientException(f"Failed to connect to the Jira server: {str(e)}")
            except Exception as e:
                logger.error(f"FailedException: {str(e)}")
                raise sdk.FailedException(f"FailedException: {str(e)}")

            if status == conf.jira_approved_state:
                logger.info(f"Jira ticket no {key} with id {id} is Approved")
                return {"status": "Resolved", "ticket_id": key, "counter": counter + 1}
            elif status == conf.jira_denied_state:
                logger.info(f"Jira ticket {id} is Denied")
                raise sdk.FailedException(f"Jira ticket no {key} with id {id} is Denied", ticket_id=id, ticket_key=key)
            else:
                logger.info(f"Waiting for the Jira ticket {id} to be approved")
                raise sdk.ExecuteAgainException(f"Waiting for the Jira ticket no {key} with id {id} to be approved", ticket_id=id, ticket_key=key, counter=counter + 1)
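
Before attaching the driver, it can be useful to verify the Jira connection parameters independently. The sketch below calls the standard Jira Cloud REST endpoint for the authenticated user with the same FQDN, API user and token you set in the driver's input variables (placeholders shown):

curl -s -u '<api user>:<api token>' \
  'https://<jira fqdn>/rest/api/3/myself'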

We will now enhance the EnvironmentTemplate with the JiraApproval driver to trigger the approval workflow when an environment is initiated.

apiVersion: eaas.envmgmt.io/v1
kind: EnvironmentTemplate
metadata:
  name: app-environment-template
  project: dev-project
spec:
  # Here we associate an agent that will run the code of all the resource templates involved
  agents:
    - name: dev-agent
  contexts:
    # Here we associate the config context that holds the AWS credentials. A config context is used so that the same credentials can be shared across multiple environment templates
    - name: rafay-config-context
  resources:
  # Here we are associating first resource template VPCResourceTemplate and its version.
  - kind: resourcetemplate
    name: vpc-resource-template
    resourceOptions:
      version: v1
    type: dynamic
  # Here we are associating second resource template EKSResourceTemplate and its version with the dependency on VPCResourceTemplate.
  - kind: resourcetemplate
    name: eks-resource-template
    resourceOptions:
      version: v1
    type: dynamic
    dependsOn:
      - name: vpc-resource-template
  # Here we are associating third resource template NamespaceResourceTemplate and its version with the dependency on EKSResourceTemplate.
  - kind: resourcetemplate
    name: namespace-resource-template
    resourceOptions:
      version: v1
    type: dynamic
    dependsOn:
      - name: eks-resource-template
  variables:
    # Here we define an input variable to collect the cluster name from the end user of the template.
    # Its override type is set to allowed, so that the end user can provide a cluster name (free-form text) at environment launch time
    # It is marked required, so that it shows up as a mandatory field in the UI
    # A selector is used here to wire the collected cluster name to the EKS resource template's variable. This way, the platform team can pre-set and restrict values in one place, the environment template, before it is shared with end developers/users
    - name: cluster_name
      valueType: text
      options:
        required: true
        override:
          type: allowed
  # Here we are setting the schedule to run appresize nightly
  schedules:
    - name: "daily-schedule "
      type: workflows
      cadence:
        cronExpression: 59 23 * * *
        cronTimezone: America/Los_Angeles
      workflows:
        tasks:
          - name: AppResizeRun
            type: driver
            options: {}
            agents:
              - name: dev-agent
            onFailure: unspecified
            driver:
              name: appresize
      optOutOptions:
        allowOptOut: false
  # Here we are associating the driver for the JiraApproval onInit hook
  hook:
    onInit:
      - name: approval
        type: driver
        options: {}
        onFailure: unspecified
        driver:
          name: system-jira-v01
  version: v4
  versionState: active

Congratulations! With this, you have successfully created a self-service template for creating a VPC, an EKS cluster and a namespace, with an approval hook.

The complete working example can be found at the following Git location:

https://github.com/RafaySystems/envmgr/tree/main/development-guide-templates