Setup
In this exercise, you will provision a Kafka Helm chart and test the autoscaling of Kafka consumers with KEDA.
Important
This tutorial describes the steps using the Rafay Web Console. The entire workflow can also be fully automated and embedded into an automation pipeline.
Assumptions¶
You have already provisioned or imported a Kubernetes cluster into your Rafay Org and created a blueprint with KEDA
Step 1: Create Namespace¶
- Log into the Web Console
- Navigate to Infrastructure -> Namespaces
- Create a new namespace, specify the name (e.g. kafka) and select type as Wizard
- In the placement section, select a cluster
- Click Save & Go to Publish
- Publish the namespace
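Once the namespace is published, you can optionally confirm from a kubectl session that it exists on the cluster. The check below assumes the example name kafka used above.

# Confirm the namespace was created on the target cluster (assumes the example name "kafka")
kubectl get namespace kafka

The namespace should be listed with a STATUS of Active.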
Step 2: Create Kafka Add-on¶
- Navigate to Infrastructure -> Add-Ons
- Select New Add-On -> Create New Add-On from Catalog
- Search for kafka
- Select kafka from default-bitnami
- Select Create Add-On
- Enter a name for the add-on
- Specify the namespace (e.g. kafka) and select the namespace created in the previous step
- Click Create
- Enter a version name
- Upload the following helm values
persistence:
  enabled: false
listeners:
  client:
    containerPort: 9092
    protocol: PLAINTEXT
    name: CLIENT
    sslClientAuth: ""
- Click Save Changes
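The values above disable persistence and expose a single PLAINTEXT client listener on port 9092, which keeps this test setup simple. For reference, if you were automating this outside the console, a roughly equivalent sketch using the Helm CLI directly against the same Bitnami chart would look like the following (the file name values.yaml is an assumption for the values shown above):

# Sketch of an equivalent direct Helm install outside the console workflow
# (assumes the values above were saved locally as values.yaml)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install kafka bitnami/kafka --namespace kafka -f values.yaml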
Step 3: Update Blueprint¶
- Navigate to Infrastructure -> Blueprints
- Edit the previously created KEDA blueprint
- Enter a version name
- Click Configure Add-Ons
- Select the previously created Kafka add-on
- Click Save Changes
- Click Save Changes again to save the new blueprint version
Step 4: Apply Blueprint¶
- Navigate to Infrastructure -> Clusters
- Click the gear icon on your cluster and select Update Blueprint
- Select the previously updated blueprint
- Click Save and Publish
After a few seconds, the blueprint with the KEDA and Kafka add-ons will be published on the cluster.
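If you want a quick sanity check that both add-ons landed on the cluster, you can list their pods with kubectl. The keda namespace below is an assumption based on how the KEDA blueprint is typically configured; adjust it to match your blueprint.

# Check that the KEDA operator pods are running (namespace name is an assumption)
kubectl get pods -n keda

# Check that the Kafka pods are starting in the namespace created earlier
kubectl get pods -n kafka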
Step 5: Verify deployment¶
- Navigate to Infrastructure -> Clusters
- Click KUBECTL on your cluster
- Type the following command
kubectl get all -n kafka
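Before moving on to the scaling test, you may also want to make sure the test-topic topic used in the next step exists. Depending on the broker configuration it may be auto-created on first use; if not, a sketch for creating it manually with the Kafka CLI bundled in the Bitnami image is shown below (the pod name kafka-controller-0 is an assumption and may differ in your deployment).

# Create the topic used by the producer and consumer in the next step
# (the pod name kafka-controller-0 is an assumption; check "kubectl get pods -n kafka")
kubectl exec -it kafka-controller-0 -n kafka -- \
  kafka-topics.sh --create \
  --topic test-topic \
  --partitions 3 \
  --replication-factor 1 \
  --bootstrap-server kafka.kafka.svc.cluster.local:9092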
Step 6: Test Scaling¶
Create Kafka Consumer¶
- Create a file named consumer.yaml with the following contents
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-consumer
  namespace: kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-consumer
  template:
    metadata:
      labels:
        app: kafka-consumer
    spec:
      containers:
        - name: consumer
          image: edenhill/kcat:1.7.1
          command: ["/bin/sh", "-c"]
          args:
            - kcat -C -b kafka.kafka.svc.cluster.local:9092 -G my-consumer-group test-topic
- Type the following command to create the resource
kubectl apply -f consumer.yaml
- Type the following command to validate the resource was created
kubectl get deployments -n kafka
You will see output similar to the following:
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
kafka-consumer   0/0     0            0           9m19s
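You can also check the consumer pods directly using the app=kafka-consumer label from the manifest above, and inspect the logs of a running pod to confirm it can reach the broker:

# List the consumer pods using the label defined in the Deployment above
kubectl get pods -n kafka -l app=kafka-consumer

# Inspect the consumer logs (only possible while at least one pod is running)
kubectl logs deploy/kafka-consumer -n kafka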
Create Kafka ScaledObject¶
- Create a file named scaledobject.yaml with the following contents
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-consumer-scaler
  namespace: kafka
spec:
  scaleTargetRef:
    name: kafka-consumer
  minReplicaCount: 0
  maxReplicaCount: 5
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka-controller-headless.kafka.svc.cluster.local:9092
        topic: test-topic
        consumerGroup: my-consumer-group
        lagThreshold: "5"
        offsetResetPolicy: latest
- Type the following command to create the ScaledObject resource
kubectl apply -f scaledobject.yaml
- Type the following command to validate the resource was created
kubectl get scaledobjects -n kafka
You will see output similar to the following:
NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE
kafka-consumer-scaler apps/v1.Deployment kafka-consumer 0 5 kafka True False Unknown 4s
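Under the hood, KEDA creates and manages a Horizontal Pod Autoscaler for the ScaledObject (by default named keda-hpa-<scaledobject-name>). You can inspect it, and the ScaledObject itself, to see the current scaling status:

# KEDA manages an HPA for the ScaledObject (by default named keda-hpa-<scaledobject-name>)
kubectl get hpa -n kafka

# Describe the ScaledObject to see trigger status, conditions, and any errors
kubectl describe scaledobject kafka-consumer-scaler -n kafka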
Create Kafka Producer¶
- Create a file named producer.yaml with the following contents
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-producer
  namespace: kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-producer
  template:
    metadata:
      labels:
        app: kafka-producer
    spec:
      containers:
        - name: producer
          image: edenhill/kcat:1.7.1
          command: ["/bin/sh", "-c"]
          args:
            - while true; do echo "test-message" | kcat -P -b kafka.kafka.svc.cluster.local:9092 -t test-topic; sleep 2; done
- Type the following command to create the resource
kubectl apply -f producer.yaml
- Type the following command to validate the resource was created
kubectl get deployments -n kafka
You will see output similar to the following, showing both the producer and the scaled consumer:
NAME READY UP-TO-DATE AVAILABLE AGE
kafka-consumer 1/1 1 1 13m
kafka-producer 1/1 1 1 22s
The number of producer replicas can be scaled up, as shown below; this increases the consumer lag and causes KEDA to scale the consumer pods to keep up with the producers.
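For example, a quick way to drive the scale-up and watch KEDA react is sketched below (the replica count of 5 is arbitrary):

# Scale the producer up to generate more messages and increase consumer lag
kubectl scale deployment kafka-producer -n kafka --replicas=5

# Watch the consumer deployment as KEDA scales it up (Ctrl+C to stop watching)
kubectl get deployment kafka-consumer -n kafka -w

# Once the producers are scaled back down and the lag is consumed,
# KEDA will eventually scale the consumer back toward minReplicaCount (0)
kubectl scale deployment kafka-producer -n kafka --replicas=0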