Expressions
The Expressions feature enhances the Environment Manager's flexibility and power by enabling dynamic input configuration. Expressions can currently be written in CUE, Starlark, or JSONPath.
Expressions allow users to dynamically configure input variables by leveraging the outputs of other resources or configurations. This capability helps platform teams create programmable templates for provisioning infrastructure and applications.
Using resource outputs as inputs for other resources means taking the output or data produced by one resource and using it as input or configuration for another. For example, in a Terraform resource template, any output variables defined in the OpenTofu/Terraform configuration represent the resource's outputs; these outputs can then be used to configure other resources. Currently, this functionality supports only OpenTofu/Terraform.
Below is an example of a VPC output containing values that can be used as inputs for other resources:
output "vpc_id" {
value = module.vpc.vpc_id
}
Application deployments¶
Consider an example of an environment named env1, created using an environment template called envtemp. This environment includes resources such as a VPC, RDS, and a Rafay-managed Kubernetes (K8s) cluster, with both the K8s cluster and RDS depending on the VPC.
A developer or data scientist who wants to deploy an application to the K8s cluster can avoid hardcoding the workload or application manifest with specific environment details. Instead, they can utilize the following expression:
$(#environment["resource_name"]["varname"])$
Let's assume that the application needs to include details of the RDS instance. Below is an example snippet of the output of the RDS resource template.
output "rds_hostname" {
description = "RDS instance hostname"
value = aws_db_instance.db.address
sensitive = true
}
For this specific output example, the workload expression would be:
$(#environment["rds"]["rds_hostname"])$
Below is an example of a workload Helm values file with expressions, where ElastiCache and RDS output values are passed as input values.
env:
celeryBrokerUrl: 'redis://$(#environment["ec-resource-tmpl"].configuration_endpoint_address):6379/0'
celeryResultBackend: 'redis://$(#environment["ec-resource-tmpl"].configuration_endpoint_address):6379/0'
debug: "True"
djangoDb: postgresql
postgresHost: '$(#environment["rds-resource-tmpl"].rds_hostname)'
postgresName: postgres
postgresPassword: '$(#environment["rds-resource-tmpl"].rds_password)'
postgresPort: '$(#environment["rds-resource-tmpl"].rds_port)'
postgresUser: '$(#environment["rds-resource-tmpl"].rds_username)'
If the variable type is a list (array), you can use indexing to reference specific values. For instance, to set up a load balancer with multiple IPs, you can reference a single entry of the list output with an expression such as:
$(resource."resource_name".output.otest1.value[0])$
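For illustration, assuming a hypothetical resource template named "lb" that exposes a list output named lb_public_ips, the first and second IP addresses could be referenced as follows (indexing starts at 0):
$(resource."lb".output.lb_public_ips.value[0])$
$(resource."lb".output.lb_public_ips.value[1])$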
Dynamic and Static Resources¶
In some scenarios, the output of a dynamic or static resource may need to be passed to another resource during the environment creation process. The following expression can be used for such cases:
$(resource."resource_name".output.otest1.value)$
Consider a scenario where a resource template includes dynamic resources, such as a VPC and RDS, and you need to ensure the RDS is installed within a specific VPC by passing the VPC ID to the RDS. This can be achieved by using the following expression to pass the VPC's output to the RDS:
$(resource."vpc".output.vpc_id.value)$
The above expression references the output value 'vpc_id' from the 'vpc' resource.
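For context, here is a minimal sketch of how the receiving RDS resource template might consume that value on the OpenTofu/Terraform side, assuming its input variable is named vpc_id (the variable and resource names below are illustrative):
# Illustrative input variable in the RDS resource template; the environment
# template would set its value to $(resource."vpc".output.vpc_id.value)$
variable "vpc_id" {
  description = "ID of the VPC that the RDS instance should be placed in"
  type        = string
}

# Example use of the passed-in VPC ID inside the template
resource "aws_security_group" "db" {
  name   = "rds-db-sg"
  vpc_id = var.vpc_id
}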
As with dynamic resources, static resource outputs can also be passed to another resource.
Static Environment Resource¶
To pass the output of a static environment resource to another resource, use the below expression:
$(resource."static-env-0".resource."resource-2".output.test3.value)$
Let's take the example where an environment named "env1" has already been created. If the user intends to create an additional environment with the "rds" resource on top of "env1" without modifying the existing "env1", use the expression outlined below:
$(resource."env1".resource."rds".output.rds_hostname.value)$
The above expression uses the environment name 'env1', the static resource name 'rds', and references the output value named 'rds_hostname'. The values can be of any type, such as array, map, or string.
Users can utilize this expression to include 'env1' as a static resource in the new environment alongside the RDS.
Dedicated Resource(s) Scenario¶
When creating an environment template, the user has the ability to add dynamic or static resources (pre-existing resources).
When the "dynamic" option is selected for resources, users are allowed to enable the dedicated option. When the dedicated option is enabled, dedicated resources are only brought up when a workload is published to that environment.
To use the workload attributes as part of the dedicated resource(s), use the following expression:
$(workload.name)$
(or)
$(workload.id)$
If ElastiCache is a dedicated resource that needs to be brought up for a workload, the user can use the workload name (workload.name) as the input to the name variable of the ElastiCache template. Then, whenever a workload is published, the dedicated ElastiCache instance comes up with the name of that workload, as sketched below.
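A hedged sketch of that input mapping, assuming the ElastiCache resource template exposes an input variable named name (the variable name is illustrative):
# illustrative input value on the ElastiCache resource template
name: '$(workload.name)$'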
Note: If you are leveraging Rafay Workloads, make sure to select the appropriate Environment Configuration so that it can be used in the resource template. When the workload is unpublished or destroyed, the dedicated resources associated with that workload are also destroyed.
Environment Resource(s)¶
Assume a scenario where a user sets up three or four environments from the same environment template, with all the resources launched in the same AWS region. To prevent naming conflicts, use the following expressions to give the resources dynamic names and IDs.
$(environment.name)$
$(environment.id)$
$(environment.project.name)$
$(environment.project.id)$
$(environment.labels.key)$
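For illustration, a hedged example of embedding these details in an input value to keep resource names unique per environment (the input variable name bucket_name is hypothetical):
# hypothetical input value combining environment details
bucket_name: '$(environment.name)$-$(environment.project.name)$-data'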
Resource Artifact Variables¶
There are instances where the user wants to download the Terraform working directory at different stages, such as Init, Plan (pre-hook and post-hook), Apply, and Output. Users can download the required artifacts via container hooks using the expressions below, which resolve to a working-directory URL and token. This enables users to access the artifacts and perform actions such as scanning the Terraform code or calculating the cost of the infrastructure.
You can use the expressions below to download artifacts:
Artifact Activity
$(resource."rtdependson-0".artifact.workdir.token)$
$(resource."rtdependson-0".artifact.workdir.url)$
Plan Activity
$(resource."rtdependson-0".plan.workdir.token)$
$(resource."rtdependson-0".plan.workdir.url)$
Trigger Expressions¶
Trigger expressions are used to enable dynamic event-driven responses within systems by allowing users to leverage the output of trigger events as input for subsequent actions or workflows.
Use the below trigger expressions to dynamically configure actions and workflows based on event triggers.
- trigger.payload.is_sso_user: Checks whether the user logged in using Single Sign-On credentials
- trigger.payload.userid: Retrieves the user's identification, typically their email address
- trigger.payload.username: Fetches the username, which is usually the user's email address
- trigger.payload.type (type: destroy, force-destroy, force-release-lock): Identifies the type of action triggered, such as deletion or lock release
- trigger.id: Provides the unique identifier for the trigger event
- trigger.payload: Represents all data associated with the trigger event
- trigger.reason: Captures the reason behind the trigger event
Examples
- When users initiate resource deletion actions, such as "destroy" or "force-destroy," the trigger expression trigger.payload.type identifies the type of deletion request. Based on the detected type, the system dynamically initiates automated cleanup processes, ensuring that associated resources are properly released and decommissioned
- When a user attempts to log in to the controller, the trigger expression trigger.payload.is_sso_user evaluates whether the user authenticated using the Single Sign-On mechanism. If the user is an SSO user, the expression returns true (see the sketch below)
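As a hedged sketch, these checks could be written in Starlark expression syntax as follows; the first simply returns the SSO flag, and the second evaluates to true only for force-destroy events:
#{trigger.payload.is_sso_user}#
#{trigger.payload.type == "force-destroy"}#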
Types of Expressions¶
Three types of expressions are supported, allowing users to retrieve values from outputs based on their complexity.
- CUE expression syntax: (expression)
- Starlark expression syntax: #{expression}#
- JSONPath expression syntax: @{expression}@
Assume the following is the environment output data available to expressions.
{
"resource": {
"rt-wf": {
"task": {
"crucial-task": {
"output": {
"host": "eaas.io"
}
}
}
},
"rt-1": {
"output": {
"workdir": {
"url": "http://chisel-127-0-0-1.nip.io:58508/download/job.tar.zst",
"token": "*****"
},
"files": {
"job.tar.zst": {
"url": "http://chisel-127-0-0-1.nip.io:58508/download/job.tar.zst",
"token": "*****"
},
"stdout": {
"url": "http://chisel-127-0-0-1.nip.io:58508/download/stdout",
"token": "*****"
}
},
"create_id": {
"sensitive": false,
"type": "string",
"value": "4713029731612950118"
},
"destroy_id": {
"sensitive": false,
"type": "string",
"value": "7718207542795149585"
},
"otest1": {
"sensitive": false,
"type": "string",
"value": "edependson-0"
},
"otest2": {
"sensitive": false,
"type": "string",
"value": "default-test2"
},
"otest3": {
"sensitive": false,
"type": "string",
"value": "default-test3"
},
"otest4": {
"sensitive": false,
"type": "string",
"value": "default-test4"
},
"otestall": {
"sensitive": false,
"type": "string",
"value": "edependson-0, default-test2, default-test3, default-test4"
}
},
"artifact": {
"workdir": {
"url": "http://chisel-127-0-0-1.nip.io:58508/download/job.tar.zst",
"token": "*****"
}
},
"init": {
"workdir": {
"url": "http://chisel-127-0-0-1.nip.io:58508/download/job.tar.zst",
"token": "*****"
}
},
"plan": {
"workdir": {
"url": "http://chisel-127-0-0-1.nip.io:58508/download/job.tar.zst",
"token": "*****"
}
},
"apply": {
"workdir": {
"url": "http://chisel-127-0-0-1.nip.io:58508/download/job.tar.zst",
"token": "*****"
}
}
}
},
"trigger": {
"id": "01H77ZN0B7HV9G2V8JJ6EG9X7G",
"createdAt": "2023-08-07T12:30:48.167581Z",
"modifiedAt": "2023-08-07T12:30:48.167582Z",
"scope": {
"parent": {
"kind": "environment",
"id": "01H77ZN0882M7RX8HNWQCSX98K",
"name": "env-1",
"partnerId": "partner007",
"organizationId": "organization007",
"projectId": "project007"
}
},
"payload": {
"type": "deploy",
"userid": "t$1d01",
"username": "test-user"
},
"status": "TRIGGER_EVENT_STATUS_PENDING",
"version": 2
},
"environment": {
"labels": {
"environment": "env-1"
},
"project": {
"id": "project007",
"name": "project007"
},
"organization": {
"id": "organization007",
"name": "organization007"
},
"partner": {
"id": "partner007",
"name": "partner007"
},
"name": "env-1",
"id": "01H77ZN0882M7RX8HNWQCSX98K"
},
"repository": {
"name": "repo1",
"id": "rx28oml",
"partner": {
"id": "partner007"
},
"project": {
"id": "project007"
},
"organization": {
"id": "organization007"
}
},
"workload": {
"name": "workload1",
"id": "rx28oml",
"partner": {
"id": "partner007"
},
"project": {
"id": "project007"
},
"organization": {
"id": "organization007"
}
}
}
Description | Cue | Starlark | JSONPath |
---|---|---|---|
dynamic resource's output value using object notation | (resource."rt-1".output.otest1.value) | #{resource."rt-1".output.otest1.value}# | @{resource."rt-1".output.otest1.value}@ |
dynamic resource's output value using index notation | (resource["rt-1"].output.otest1.value) | #{resource["rt-1"].output.otest1.value}# | @{resource["rt-1"].output.otest1.value}@ |
static resource output value | (resource["static-rt-1"].output.otest1.value) | #{resource["static-rt-1"].output.otest1.value}# | @{resource["static-rt-1"].output.otest1.value}@ |
static environment output value | (resource["static-env-0"].resource["rt-0"].output.otest2.value) | #{resource["static-env-0"].resource["rt-0"].output.otest2.value}# | @{resource["static-env-0"].resource["rt-0"].output.otest2.value}@ |
resource artifacts from artifact activity | (resource["rt-1"].artifact.workdir.token) (resource["rt-1"].artifact.workdir.url) | #{resource["rt-1"].artifact.workdir.token}# #{resource["rt-1"].artifact.workdir.url}# | @{resource["rt-1"].artifact.workdir.token}@ @{resource["rt-1"].artifact.workdir.url}@ |
resource artifacts from output activity | (resource["rt-1"].output.workdir.token) (resource["rt-1"].output.workdir.url) (resource["rt-1"].output.files.stdout.url) (resource["rt-1"].output.files.stdout.token) | #{resource["rt-1"].output.workdir.token}# #{resource["rt-1"].output.workdir.url}# #{resource["rt-1"].output.files.stdout.url}# #{resource["rt-1"].output.files.stdout.token}# | @{resource["rt-1"].output.workdir.token}@ @{resource["rt-1"].output.workdir.url}@ @{resource["rt-1"].output.files.stdout.url}@ @{resource["rt-1"].output.files.stdout.token}@ |
environment details | (environment.name) (environment.id) (environment.project.name) (environment.project.id) (environment.labels.key) | #{environment.name}# #{environment.id}# #{environment.project.name}# #{environment.project.id}# #{environment.labels.key}# | @{environment.name}@ @{environment.id}@ @{environment.project.name}@ @{environment.project.id}@ @{environment.labels.key}@ |
workload details (for workload trigger) | (workload.name) (workload.id) | #{workload.name}# #{workload.id}# | @{workload.name}@ @{workload.id}@ |
repository details (for repository trigger) | (repository.name) (repository.id) | #{repository.name}# #{repository.id}# | @{repository.name}@ @{repository.id}@ |
custom resource's output value | (resource["rt-wf"].task["crucial-task"].output.host) | #{resource["rt-wf"].task["crucial-task"].output.host}# | @{resource["rt-wf"].task["crucial-task"].output.host}@ |
environment lifecycle hook outputs | (environment.hook.onInit.hook_name.output) (environment.hook.onSuccess.hook_name.output) (environment.hook.onFailure.hook_name.output) (environment.hook.onCompletion.hook_name.output) | #{environment.hook.onInit.hook_name.output}# #{environment.hook.onSuccess.hook_name.output}# #{environment.hook.onFailure.hook_name.output}# #{environment.hook.onCompletion.hook_name.output}# | @{environment.hook.onInit.hook_name.output}@ @{environment.hook.onSuccess.hook_name.output}@ @{environment.hook.onFailure.hook_name.output}@ @{environment.hook.onCompletion.hook_name.output}@ |
resource lifecycle hook outputs | (resource.resource_name.hook.onInit.hook_name.output) (resource.resource_name.hook.onSuccess.hook_name.output) (resource.resource_name.hook.onFailure.hook_name.output) (resource.resource_name.hook.onCompletion.hook_name.output) | #{resource.resource_name.hook.onInit.hook_name.output}# #{resource.resource_name.hook.onSuccess.hook_name.output}# #{resource.resource_name.hook.onFailure.hook_name.output}# #{resource.resource_name.hook.onCompletion.hook_name.output}# | @{resource.resource_name.hook.onInit.hook_name.output}@ @{resource.resource_name.hook.onSuccess.hook_name.output}@ @{resource.resource_name.hook.onFailure.hook_name.output}@ @{resource.resource_name.hook.onCompletion.hook_name.output}@ |
resource provider's deploy lifecycle hook outputs | (resource.template_name.hook.deploy.init.before.hook_name.output) (resource.template_name.hook.deploy.init.after.hook_name.output) (resource.template_name.hook.deploy.plan.before.hook_name.output) (resource.template_name.hook.deploy.plan.after.hook_name.output) (resource.template_name.hook.deploy.apply.before.hook_name.output) (resource.template_name.hook.deploy.apply.after.hook_name.output) (resource.template_name.hook.deploy.output.before.hook_name.output) (resource.template_name.hook.deploy.output.after.hook_name.output) | #{resource.template_name.hook.deploy.init.before.hook_name.output}# #{resource.template_name.hook.deploy.init.after.hook_name.output}# #{resource.template_name.hook.deploy.plan.before.hook_name.output}# #{resource.template_name.hook.deploy.plan.after.hook_name.output}# #{resource.template_name.hook.deploy.apply.before.hook_name.output}# #{resource.template_name.hook.deploy.apply.after.hook_name.output}# #{resource.template_name.hook.deploy.output.before.hook_name.output}# #{resource.template_name.hook.deploy.output.after.hook_name.output}# | @{resource.template_name.hook.deploy.init.before.hook_name.output}@ @{resource.template_name.hook.deploy.init.after.hook_name.output}@ @{resource.template_name.hook.deploy.plan.before.hook_name.output}@ @{resource.template_name.hook.deploy.plan.after.hook_name.output}@ @{resource.template_name.hook.deploy.apply.before.hook_name.output}@ @{resource.template_name.hook.deploy.apply.after.hook_name.output}@ @{resource.template_name.hook.deploy.output.before.hook_name.output}@ @{resource.template_name.hook.deploy.output.after.hook_name.output}@ |
resource provider's destroy lifecycle hook outputs | (resource.template_name.hook.destroy.init.before.hook_name.output) (resource.template_name.hook.destroy.init.after.hook_name.output) (resource.template_name.hook.destroy.plan.before.hook_name.output) (resource.template_name.hook.destroy.plan.after.hook_name.output) (resource.template_name.hook.destroy.destroy.before.hook_name.output) (resource.template_name.hook.destroy.destroy.after.hook_name.output) | #{resource.template_name.hook.destroy.init.before.hook_name.output}# #{resource.template_name.hook.destroy.init.after.hook_name.output}# #{resource.template_name.hook.destroy.plan.before.hook_name.output}# #{resource.template_name.hook.destroy.plan.after.hook_name.output}# #{resource.template_name.hook.destroy.destroy.before.hook_name.output}# #{resource.template_name.hook.destroy.destroy.after.hook_name.output}# | @{resource.template_name.hook.destroy.init.before.hook_name.output}@ @{resource.template_name.hook.destroy.init.after.hook_name.output}@ @{resource.template_name.hook.destroy.plan.before.hook_name.output}@ @{resource.template_name.hook.destroy.plan.after.hook_name.output}@ @{resource.template_name.hook.destroy.destroy.before.hook_name.output}@ @{resource.template_name.hook.destroy.destroy.after.hook_name.output}@ |
workload expressions to refer to a resource's output | (#environment["rt-1"]["otest1"]) | #{#environment["rt-1"]["otest1"]}# | @{#environment["rt-1"]["otest1"]}@ |
drivers can use input variables as expressions | (current.input["variable-name"]) | #{current.input["variable-name"]}# | @{current.input["variable-name"]}@ |
driver output as expressions | (current.output["variable-name"]) | #{current.output["variable-name"]}# | @{current.output["variable-name"]}@ |
The expressions mentioned above are straightforward and easy to use. However, there may be instances where users need to query more complex JSON outputs.
For example, consider the JSON structure provided below as part of a resource output. Let’s explore how to write more advanced queries to extract the desired results.
{
"resource": {
"rafay-vms": {
"task": {
"task1": {
"output": {
"vms": [
{
"id": "vm-001",
"name": "web-server-1",
"type": "t2.micro",
"region": "us-east-1",
"status": "running",
"ip_address": "192.168.1.10",
"cpu_cores": 1,
"memory_size_mb": 1024,
"disk_size_gb": 30,
"tags": {
"environment": "production",
"role": "web"
}
},
{
"id": "vm-002",
"name": "web-server-2",
"type": "t2.micro",
"region": "us-east-1",
"status": "stopped",
"ip_address": "192.168.1.11",
"cpu_cores": 1,
"memory_size_mb": 1024,
"disk_size_gb": 30,
"tags": {
"environment": "dev",
"role": "web"
}
},
{
"id": "vm-003",
"name": "app-server-1",
"type": "t3.medium",
"region": "us-east-1",
"status": "running",
"ip_address": "192.168.1.12",
"cpu_cores": 2,
"memory_size_mb": 4096,
"disk_size_gb": 50,
"tags": {
"environment": "production",
"role": "application"
}
},
{
"id": "vm-004",
"name": "db-server-1",
"type": "t3.large",
"region": "us-east-1",
"status": "running",
"ip_address": "192.168.1.13",
"cpu_cores": 2,
"memory_size_mb": 8192,
"disk_size_gb": 100,
"tags": {
"environment": "dev",
"role": "database"
}
}
]
}
}
}
}
}
}
For readability, let's assume x = resource."rafay-vms".task.task1.output; when writing the actual expressions, replace x with resource."rafay-vms".task.task1.output.
Description | Cue | Starlark | JSONPath |
---|---|---|---|
Get the ith VM object from the list; replace i with the index number (indexing starts from 0) | (x.vms[i]) | #{x.vms[i]}# | @{x.vms[i]}@ |
Get list of all VMs | (x.vms) | #{x.vms}# | @{.x.vms}@ @{.x.vms[*]}@ //wildcard operator is supported |
Get list of all VM names | ([for vm in x.vms {vm.name}]) | #{[vm.name for vm in x.vms]}# | @{..name}@ @{.x.vms[*].name}@ |
Get list of all VMs that are of type t2.micro | ([for vm in x.vms if vm.type == "t2.micro" {vm}]) | #{[vm for vm in x.vms if vm.type == "t2.micro"]}# | @{$.x.vms[?(@.type == "t2.micro")]}@ |
Get list of all VM ids that are in running status | ([for vm in x.vms if vm.status == "running" {vm.id}]) | #{[vm.id for vm in x.vms if vm.status == "running"]}# | @{$.x.vms[?(@.status == "running")].id}@ |
Get list of all VMs that are in running status in the dev environment | ([for vm in x.vms if vm.status == "running" && vm.tags.environment == "dev" {vm}]) | #{[vm for vm in x.vms if vm.status == "running" and vm.tags.environment == "dev"]}# | @{$.x.vms[?(@.status == "running" && @.tags.environment == "dev")]}@ |
Get only the first VM that is of type t2.micro | ([for vm in x.vms if vm.type == "t2.micro" {vm}][0]) | #{[vm for vm in x.vms if vm.type == "t2.micro"][0]}# | N/A |
Get map of all VMs with ids as keys and VM objects as values | ({for vm in x.vms {"\(vm.id)": vm}}) | #{{vm.id: vm for vm in x.vms}}# | N/A |
Note:
- If the JSONPath expression begins with @{$.expression}@, the result will always be returned as a list, regardless of its length. This is the default behavior of JSONPath expressions
- If the JSONPath expression begins with @{expression}@, the result will be returned as an object if the result length is 1; otherwise, it will be returned as a list
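Putting this together with the substitution noted earlier, a worked example in Starlark syntax that collects the IP addresses of all running VMs from the sample output above would be:
#{[vm.ip_address for vm in resource["rafay-vms"].task["task1"].output.vms if vm.status == "running"]}#
Against the sample data, this evaluates to ["192.168.1.10", "192.168.1.12", "192.168.1.13"].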
Scripts¶
Users now have the flexibility to write scripts in addition to expressions, depending on the complexity of their requirements. Both scripts and expressions can be stored in variables, environment variables, or within file contents. Scripts are evaluated first, followed by expressions. This allows users to generate expressions within scripts that can be evaluated later.
The Environment Manager supports two types of scripts:
- Cue Script
- Starlark Script
Cue¶
A Cue script must begin with the syntax $$ script begin $$ and end with $$ script end $$ to define the code that the workflow engine will execute. Cue language does not support functions; it consists only of statements.
Cue variables defined within these statements are extracted as JSON output, whereas private Cue variables indicated by a prefix of # or _ are excluded from the output. Additionally, system variables introduced by the workflow engine, which also start with #, are treated as private and are not included in the output.
Refer here for more details: CUE
Example below:
$$ script begin $$
#activities: {for name, activity in #ctx.activities {"\(name)": activity}}
// this will generate the JSON array of all outputs of all activities combined.
// ["output1", "output2", "output3"]
[for name, _ in #activities for k, _ in #activities[name].output if k != "files" {k}]
// or, you can use the below statement to capture it as JSON object
// this will generate the JSON object of all outputs of all activities combined.
// {"all_outputs": ["output1", "output2", "output3"]}
all_outputs: [for name, _ in #activities for k, _ in #activities[name].output if k != "files" {k}]
$$ script end $$
Starlark¶
Starlark is a programming language that is a dialect of Python, making it easy for users to write scripts without needing to learn a new language. Scripts must begin with the syntax ## script begin ## and end with ## script end ##.
The workflow engine invokes a function named eval(**kwargs) with kwargs as an argument, where kwargs is a key-value dictionary. Users can define custom functions within the script and call them from the eval function.
The eval function must return a value of a primitive data type, a list, or a dictionary. These returned values are then utilized as input variables for the Environment Manager.
Refer here for more details: bazelbuild/starlark
Example below:
## script begin ##
# get_outputs is a function which gets the list of all outputs of all activities combined.
def get_outputs(ctx):
variables = []
for activity_name in ctx["activities"]:
outputs = ctx["activities"][activity_name].get("output", {})
for output in sorted(outputs):
if output == "files":
continue
variables.append(output)
return variables
# eval is mandatory; it calls get_outputs and returns the list, which is then converted to a JSON array in the workflow engine.
def eval(**kwargs):
return get_outputs(kwargs["ctx"])
# or
# eval calls get_outputs and returns the dict, which is then converted to a JSON object in the workflow engine.
def eval(**kwargs):
return {"all_outputs": get_outputs(kwargs["ctx"])}
## script end ##