Enable Dynamic Resource Allocation (DRA) in Kubernetes

In the previous blog, we introduced the concept of Dynamic Resource Allocation (DRA), which went GA in Kubernetes v1.34, released in August 2025.

In this post, we'll configure DRA on a Kubernetes 1.34 cluster.

Info

The steps in this blog are optimized so you can try DRA on a laptop in under 15 minutes. They are written for macOS, but Windows users can follow along with minor adjustments.
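One convenient way to get a local v1.34 cluster on a laptop is kind (Kubernetes in Docker). The sketch below is one possible setup, assuming kind and Docker are installed; the node image tag is an assumption and should be checked against the kind release notes for an image built for v1.34:

```yaml
# kind-config.yaml: a minimal single-node cluster sketch.
# Create the cluster with: kind create cluster --name dra-demo --config kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.34.0  # assumed tag; verify against kind's published images
```

Because DRA is GA in v1.34, no feature gates need to be enabled; you still need a DRA driver installed for real hardware.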

Introduction to Dynamic Resource Allocation (DRA) in Kubernetes

In the previous blog, we reviewed the limitations of Kubernetes GPU scheduling. These often result in:

  1. Resource fragmentation – large portions of GPU memory remain idle and unusable.
  2. Topology blindness – multi-GPU workloads may be scheduled suboptimally.
  3. Cost explosion – teams overprovision GPUs to work around scheduling inefficiencies.

In this post, we'll look at how Dynamic Resource Allocation (DRA), a feature that reached GA in Kubernetes v1.34, aims to solve these problems and transform GPU scheduling in Kubernetes.
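At its core, DRA replaces opaque device counts with structured claims that the scheduler can reason about. The following is a rough sketch of the GA `resource.k8s.io/v1` API shapes; the device class name `gpu.example.com`, the template name, and the container image are placeholders for illustration, not a real driver:

```yaml
# A ResourceClaimTemplate requesting one device from a (hypothetical) DeviceClass
# that a vendor DRA driver would publish.
apiVersion: resource.k8s.io/v1
kind: ResourceClaimTemplate
metadata:
  name: single-gpu
spec:
  spec:
    devices:
      requests:
      - name: gpu
        exactly:
          deviceClassName: gpu.example.com  # placeholder DeviceClass name
---
# A Pod that references the template; the scheduler allocates a matching
# device before binding the Pod to a node.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: single-gpu
  containers:
  - name: app
    image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04  # placeholder image
    resources:
      claims:
      - name: gpu  # bind the claim to this container
```

We'll walk through these objects in detail once the cluster is up; the key shift is that the claim describes *which kind* of device is needed, rather than just *how many*.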

Rethinking GPU Allocation in Kubernetes

Kubernetes has cemented its position as the de-facto standard for orchestrating containerized workloads in the enterprise. In recent years, its role has expanded beyond web services and batch processing into one of the most demanding domains of all: AI/ML workloads.

Organizations now run everything from lightweight inference services to massive, distributed training pipelines on Kubernetes clusters, relying heavily on GPU-accelerated infrastructure to fuel innovation.

But there's a problem. In this blog, we will explore why the current model falls short, what a more advanced GPU allocation approach looks like, and how it can unlock efficiency, performance, and cost savings at scale.