
Index

Understanding ArgoCD Reconciliation: How It Works, Why It Matters, and Best Practices

ArgoCD is a powerful GitOps controller for Kubernetes, enabling declarative configuration and automated synchronization of workloads. One of its core functions is reconciliation, a continuous process by which ArgoCD ensures that the live state of a Kubernetes cluster matches the desired state defined in a Git repository.

While this might sound straightforward, reconciliation plays a critical role in the GitOps lifecycle, and its default behavior can be surprisingly aggressive. In this blog post, we’ll explore:

  • What reconciliation in ArgoCD actually does
  • Why it exists and how it ensures cluster integrity
  • The pitfalls of the default timer
  • Best practices for tuning reconciliation to balance responsiveness and resource efficiency (see the configuration sketch below)
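
As a preview of the tuning discussion, the polling interval is controlled by the timeout.reconciliation key in the argocd-cm ConfigMap. A minimal sketch, with 300s as an illustrative value rather than a recommendation:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: argocd-cm
      namespace: argocd
      labels:
        app.kubernetes.io/part-of: argocd
    data:
      # How often ArgoCD polls Git and reconciles drift (the default is 3 minutes).
      # A longer interval reduces load on Git and the cluster; a shorter one reacts faster.
      timeout.reconciliation: 300s

Note that the relevant ArgoCD components typically need a restart before a new interval takes effect.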

Info

In a related blog, we describe how customers using Rafay are able to Block Drift in the first place.

ArgoCD Reconciliation

Custom GPU Resource Classes in Kubernetes

In the modern era of containerized machine learning and AI infrastructure, GPUs are a critical and expensive asset. Kubernetes makes scheduling and isolation easier—but managing GPU utilization efficiently requires more than just assigning something like

nvidia.com/gpu: 1
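
For context, a request like this is normally expressed as a resource limit in a pod spec. A minimal sketch of a pod requesting one full GPU is shown below; the image tag is a placeholder and the alternative resource name in the comment is purely hypothetical:

    apiVersion: v1
    kind: Pod
    metadata:
      name: gpu-workload
    spec:
      containers:
        - name: trainer
          image: nvidia/cuda:12.2.0-base-ubuntu22.04   # placeholder image
          resources:
            limits:
              # One full physical GPU; a custom resource class would replace this
              # with a finer-grained name, e.g. example.com/gpu-small (hypothetical).
              nvidia.com/gpu: 1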

In this blog post, we will explore what custom GPU resource classes are, why they matter, and when to use them for maximum impact. Custom GPU resource classes are a powerful technique for fine-grained GPU management in multi-tenant, cost-sensitive, and performance-critical environments.

Info

If you are new to GPU sharing approaches, we recommend reading the following introductory blogs: Demystifying Fractional GPUs in Kubernetes and Choosing the Right Fractional GPU Strategy.

Choosing the Right Fractional GPU Strategy for Cloud Providers

As demand for GPU-accelerated workloads soars across industries, cloud providers are under increasing pressure to offer flexible, cost-efficient, and isolated access to GPUs. While full GPU allocation remains the norm, it often leads to resource waste—especially for lightweight or intermittent workloads.

In the previous blog, we described the three primary technical approaches for fractional GPUs. In this blog, we'll explore the most viable approaches to offering fractional GPUs in a GPU-as-a-Service (GPUaaS) model, and evaluate their suitability for cloud providers serving end customers.

Demystifying Fractional GPUs in Kubernetes: MIG, Time Slicing, and Custom Schedulers

As GPU acceleration becomes central to modern AI/ML workloads, Kubernetes has emerged as the orchestration platform of choice. However, allocating full GPUs to many real-world workloads is overkill, resulting in underutilization and soaring costs.

Enter the need for fractional GPUs: ways to share a physical GPU among multiple containers without compromising performance or isolation.

In this post, we'll walk through three approaches to achieve fractional GPU access in Kubernetes:

  1. MIG (Multi-Instance GPU)
  2. Time Slicing
  3. Custom Schedulers (e.g., KAI)

For each, we’ll break down how it works, its pros and cons, and when to use it.
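
As a small preview, time slicing is typically enabled by handing the NVIDIA device plugin a sharing configuration. A minimal sketch of that configuration, assuming the plugin is deployed with support for this config file (the replica count is illustrative):

    # Advertise each physical GPU as multiple schedulable nvidia.com/gpu units.
    # Time slicing shares compute cycles only; it provides no memory or fault isolation.
    version: v1
    sharing:
      timeSlicing:
        resources:
          - name: nvidia.com/gpu
            replicas: 4   # each physical GPU appears as 4 allocatable GPUs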

The Rise of AI Agents: From Zero to Production

Artificial Intelligence (AI) has moved far beyond simple chatbots and rigid automation. At the frontier of this evolution lies a powerful new paradigm—AI Agents. These autonomous, intelligent programs can understand their environment, reason through complex problems, and take meaningful actions.

Whether you're a developer, product leader, or startup founder, understanding AI agents isn't just a competitive advantage—it's a necessity. In this blog, we will explain what AI agents are, how they differ from regular applications, and how you can build them.

AI Agents

Configure and Manage GPU Resource Quotas in Multi-Tenant Clouds

In multi-tenant GPU cloud environments, effective resource management is critical to ensure fair usage and prevent contention. GPU resource quotas allow organizations to allocate computing capacity at multiple levels—across the entire organization, at individual project scopes, and even down to the per-user level. In this blog, we will describe how GPU clouds can provide fine-grained control over limited resources to their tenants and administrators.

Per Project and User Quotas
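
At the Kubernetes layer, a per-project limit of this kind typically maps to a ResourceQuota on the tenant's namespace, while per-user limits are enforced above Kubernetes by the platform. A minimal sketch, with an illustrative namespace and quota value:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: gpu-quota
      namespace: team-a   # illustrative project/tenant namespace
    spec:
      hard:
        # No more than 4 GPUs may be requested across all pods in this namespace.
        requests.nvidia.com/gpu: "4"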

Enforcing ServiceNow-Based Approvals with Rafay

Enterprises often require explicit approvals before critical actions can proceed, especially when provisioning infrastructure or making configuration changes. With Rafay's out-of-the-box (OOB) workflow handlers, customers can easily integrate with popular ITSM systems such as ServiceNow (SNOW).

Catalog

This post explains how to configure and use Rafay’s ServiceNow Workflow Handler to enforce approval gates.


Workflow Handlers in Rafay

Rafay enables platform teams to attach Workflow Handlers to key actions as pre-hooks or post-hooks:

  • Pre-hook Handlers: Triggered before an action (e.g., pause provisioning until approval is received)
  • Post-hook Handlers: Triggered after an action (e.g., notify stakeholders after infrastructure (environment) creation)

Typical Scenarios

Here are a few use cases where ServiceNow-based approvals come into play:

  • Developers request a vCluster to test their app before raising a PR
  • Platform admins initiate a Kubernetes upgrade for a fleet of clusters that requires approval

Self-Service Slurm Clusters on Kubernetes with Rafay GPU PaaS

In the previous blog, we discussed how Project Slinky bridges the gap between Slurm, the de facto job scheduler in HPC, and Kubernetes, the standard for modern container orchestration.

Project Slinky, combined with Rafay's GPU Platform-as-a-Service (PaaS), gives enterprises and cloud providers a transformative way to deliver secure, multi-tenant, self-service access to Slurm-based HPC environments on shared Kubernetes clusters. Together, they allow cloud providers and enterprise platform teams to offer Slurm-as-a-Service on Kubernetes, without compromising on performance, usability, or control.

Design

Project Slinky: Bringing Slurm Scheduling to Kubernetes

As high-performance computing (HPC) environments evolve, there’s an increasing demand to bridge the gap between traditional HPC job schedulers and modern cloud-native infrastructure. Project Slinky is an open-source project that integrates Slurm, the industry-standard workload manager for HPC, with Kubernetes, the de facto orchestration platform for containers.

This enables organizations to deploy and operate Slurm-based workloads on Kubernetes clusters, allowing them to leverage the best of both worlds: Slurm's mature, job-centric HPC scheduling model and Kubernetes's scalable, cloud-native runtime environment.

Project Slinky