Mohan Atreya

Adding New Language Support to the Self Service Portal in 5 Mins

GPU Cloud Providers and enterprises serving a global user base need the end-user-facing Self Service Portal to speak their end users' language — literally. If you're serving AI researchers in Paris, data scientists in Montreal, or ML engineers across Francophone Africa, offering the portal in French is a powerful way to reduce friction and make GPU consumption feel native.

The Rafay Platform's Language Customization feature makes it straightforward for admins to add French (or any other language), customize translations, and give end users the ability to switch languages on their own. In this post, we'll walk through the entire process of adding French to the Self Service Portal — from configuring the default locale to verifying the end user experience.

[Image: New Language]

Developer Pods for Platform Teams: Designing the Right Self-Service GPU Experience

In Part 1, we discussed the core problem: most organizations still deliver GPU access through the wrong abstraction. Developers and data scientists do not want tickets, YAML, and long provisioning cycles. They want a ready-to-use environment with the right amount of compute, available when they need it.

In Part 2, we looked at what that self-service experience feels like for the end user: a familiar, guided workflow that lets them select a profile, launch an environment, and SSH into it in about 30 seconds.

In this part, we shift to the other side of the experience: how platform teams design that experience in the first place. Specifically, we will look at how teams can configure and customize a Developer Pod SKU using the integrated SKU Studio in the Rafay Platform.

[Image: SKU in Rafay Platform]

Developer Pods: A Self-Service GPU Experience That Feels Instant

In Part 1, we discussed the core problem: most organizations still deliver GPU access through the wrong abstraction. Developers do not want tickets, YAML, and long wait times. They want a working environment with the right tools and GPU access, available when they need it.

In this post, let’s look at the other half of the story: the end-user experience. Specifically, what does self-service actually look like for a developer or data scientist using Rafay Developer Pods?

The answer is simple: a familiar UI, a few guided choices, and a running environment they can SSH into in about 30 seconds.

[Image: New Developer Pod]

Instant Developer Pods: Rethinking GPU Access for AI Teams

It's the week of KubeCon Europe 2026 in Amsterdam. Much of the conversation will be about Kubernetes, AI, and GPUs. Let's have an honest discussion.

It is 2026 and we're still handing out infrastructure like it's 2008. The entire workflow is slow, expensive, and wildly inefficient. Meanwhile, your most expensive resource — GPUs — sits idle or underutilized.

The way most enterprises deliver GPU access today is completely misaligned with how developers and data scientists actually work. A developer wants to:

  • Run a PyTorch experiment
  • Fine-tune a model
  • Test a pipeline

What do they get instead?

A ticketing system with a multi-day wait time, and then finally a bloated VM or an entire bare-metal GPU server.

There has to be a better way. This is the first part of a blog series on Rafay's Developer Pods. In this post, we will describe why and how many of our customers have completely transformed GPU delivery by giving their end users a self-service experience.

[Image: Dev Pod]

How Rafay and NVIDIA Help Neoclouds Monetize Accelerated Computing with Token Factories

The AI boom has created an unprecedented demand for GPUs. In response, a new generation of GPU-first cloud providers purpose-built for AI workloads—known as neoclouds—has emerged to deliver the AI infrastructure needed to power AI applications.

However, a critical shift is happening in the market. Selling raw GPU infrastructure is no longer enough. The real opportunity lies in turning GPU capacity into AI services. Developers and enterprises don't want GPUs. They want models, APIs, and intelligence on demand.

With Rafay's Token Factory offering, neoclouds can transform GPU clusters into a self-service AI platform that exposes models through token-metered APIs. The result is a marketplace where neoclouds monetize infrastructure, model developers reach users, and developers build applications — all on the same platform.

This is where Rafay and NVIDIA have come together to unlock a powerful new business model for AI infrastructure providers.

[Image: End User Portal Token Factory]

Scaling Trust: The Fortanix and Rafay Integration for Enterprise Confidential AI

In the modern enterprise, Artificial Intelligence (AI) has moved from a "nice-to-have" experimental phase to a core business driver. However, for organizations in highly regulated sectors—such as banking, healthcare, and government—the path to AI adoption is fraught with security hurdles.

The primary concern is protecting sensitive data not just at rest or in transit, but in use. In the image below, the app uses a proprietary model that needs to be secured using confidential computing.

[Image: Confidential VM]

Traditional security measures often fall short when data must be decrypted to be processed by an AI model. This is where Confidential Computing changes the game, and why the joint integration between Fortanix and Rafay is a landmark development for the "AI Factory" of the future.

NVIDIA AICR Generates It. Rafay Runs It. Your GPU Clusters, Finally Under Control

Deploying GPU-accelerated Kubernetes infrastructure for AI workloads has never been simple. Administrators face a relentless compatibility matrix: matching GPU driver versions to CUDA releases, pinning Kubernetes versions to container runtimes, tuning configurations differently for NVIDIA H100s versus A100s, and doing all of it differently again for training versus inference.

One wrong version combination and workloads fail silently, or worse, perform far below hardware capability. For years, the answer was static documentation, tribal knowledge, and hoping that whoever wrote the runbook last week remembered to update it.

NVIDIA's AI Cluster Runtime (AICR) and the Rafay Platform represent a new approach — one where GPU infrastructure configuration is treated as code, generated deterministically, validated against real hardware, and enforced continuously across fleets of clusters.

Together, they cover the full lifecycle from first aicr snapshot to production-grade day-2 operations, with cluster blueprints as the critical bridge between the two.

[Image: Baton Pass]

From Slurm to Kubernetes: A Guide for HPC Users

If you've spent years submitting batch jobs with Slurm, moving to a Kubernetes-based cluster can feel like learning a new language. The concepts are familiar — resource requests, job queues, priorities — but the vocabulary and tooling are different. This guide bridges that gap, helping HPC veterans understand how Kubernetes handles workloads and what that means day-to-day.
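To make the mapping concrete, here is a minimal sketch of how a typical Slurm submission translates to Kubernetes. The script name, container image, and resource figures are illustrative assumptions, not taken from the post, and the GPU request assumes the cluster exposes GPUs via the standard nvidia.com/gpu resource.

```shell
# Slurm: request 1 GPU, 4 CPUs, and 16G of memory for a batch script
#   sbatch --gres=gpu:1 --cpus-per-task=4 --mem=16G train.sh

# Kubernetes: the rough equivalent is a Job whose pod requests the same resources
# (image name and job name are hypothetical):
kubectl create job train --image=nvcr.io/nvidia/pytorch:latest -- python train.py

# Resource requests live in the Job's pod spec rather than on the command line, e.g.:
#   resources:
#     limits:
#       nvidia.com/gpu: 1
#       cpu: "4"
#       memory: 16Gi

# Where Slurm uses squeue and scancel, Kubernetes uses:
kubectl get jobs            # ~ squeue
kubectl delete job train    # ~ scancel
```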

[Image: Slurm to Kubernetes]

Run nvidia-smi on Remote GPU Kubernetes Clusters Using Rafay Zero Trust Access

Infra operators managing GPU-enabled Kubernetes clusters often need a fast and secure way to validate GPU visibility, driver health, and runtime readiness without exposing the cluster directly or relying on bastion hosts, VPNs, or manually managed kubeconfigs.

With Rafay's zero trust kubectl, operators can securely access remote Kubernetes resources and execute commands inside running pods from the Rafay platform. A simple but powerful example is running nvidia-smi inside a GPU Operator pod to confirm that the NVIDIA driver stack, CUDA runtime, and GPU devices are functioning correctly on a remote cluster.

In this post, we walk through how infra operators can use Rafay's zero trust access workflow to run nvidia-smi on a remote GPU-based Kubernetes cluster.
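As a sketch of what that check boils down to (the namespace and pod label below assume GPU Operator defaults and are not confirmed by the post), the operator runs a kubectl exec against a driver pod over the zero-trust channel:

```shell
# Using the zero-trust kubeconfig downloaded from the Rafay console,
# find a driver pod on the target cluster:
kubectl -n gpu-operator get pods -l app=nvidia-driver-daemonset

# Run nvidia-smi inside it to confirm the driver stack, CUDA runtime,
# and GPU devices are visible on the remote node:
kubectl -n gpu-operator exec <driver-pod-name> -- nvidia-smi
```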

[Image: nvidia-smi over ZTKA]

How Rafay Helps GPU Clouds Run Complex Hackathons at Scale

Running a hackathon is hard. Running a GPU-powered hackathon for thousands of participants — where every developer needs a fully configured environment (notebooks, developer pods, etc.) with dedicated GPU resources, ready to go the moment the event kicks off — is an entirely different class of problem. This is exactly where Rafay's platform has helped change the game for GPU Cloud providers.
