Workload API

FEATURE STATE: Kubernetes v1.35 [alpha] (disabled by default)

The Workload API resource allows you to describe the scheduling requirements and structure of a multi-Pod application. While workload controllers such as Job provide the runtime behavior of a workload, the Workload API expresses the scheduling constraints for the group of Pods that make up that workload.

What is a Workload?

The Workload API resource is part of the scheduling.k8s.io/v1alpha1 API group (and your cluster must have that API group enabled, as well as the GenericWorkload feature gate, before you can benefit from this API). This resource acts as a structured, machine-readable definition of the scheduling requirements of a multi-Pod application. While user-facing workloads like Jobs define what to run, the Workload resource determines how a group of Pods should be scheduled and how its placement should be managed throughout its lifecycle.
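
Before you can create Workload objects, both the API group and the feature gate have to be turned on. The snippet below is a minimal sketch of the relevant flags, shown as an excerpt from a kubeadm-style static Pod manifest for kube-apiserver (typically /etc/kubernetes/manifests/kube-apiserver.yaml); the file path and surrounding manifest fields are assumptions, and the kube-scheduler may also need the same feature gate enabled.

spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # Enable the alpha API group that serves the Workload resource.
    - --runtime-config=scheduling.k8s.io/v1alpha1=true
    # Enable the GenericWorkload feature gate.
    - --feature-gates=GenericWorkload=true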

API structure

A Workload allows you to define a group of Pods and apply a scheduling policy to them. It consists of two sections: a list of pod groups and a reference to a controller.

Pod groups

The podGroups list defines the distinct components of your workload. For example, a machine learning job might have a driver group and a worker group.

Each entry in podGroups must have:

  1. A unique name that can be used in the Pod's Workload reference.
  2. A scheduling policy (basic or gang).

For example, the following Workload defines a single gang-scheduled group of workers for a training Job:
apiVersion: scheduling.k8s.io/v1alpha1
kind: Workload
metadata:
  name: training-job-workload
  namespace: some-ns
spec:
  controllerRef:
    apiGroup: batch
    kind: Job
    name: training-job
  podGroups:
  - name: workers
    policy:
      gang:
        # The gang is schedulable only if 4 pods can run at once
        minCount: 4
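
Each Pod that belongs to the workload points back at the Workload object and at one of its pod groups by name. The sketch below assumes a workloadRef-style field in the Pod spec with name and podGroup entries; the exact field name and shape belong to the alpha Pod API, so treat them as illustrative and check the Pod API reference for your release.

apiVersion: v1
kind: Pod
metadata:
  name: training-job-worker-0
  namespace: some-ns
spec:
  # Assumed field names, shown for illustration only.
  workloadRef:
    name: training-job-workload   # the Workload object
    podGroup: workers             # a name from the Workload's podGroups list
  containers:
  - name: worker
    image: registry.example/training-worker:latest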

Referencing a workload controlling object

The controllerRef field links the Workload back to the specific high-level object defining the application, such as a Job or a custom CRD. This is useful for observability and tooling. This data is not used to schedule or manage the Workload.
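
For instance, a Workload created for a custom resource could reference it as shown below; the training.example.com API group and TrainingJob kind are made up for illustration.

spec:
  controllerRef:
    apiGroup: training.example.com   # hypothetical CRD API group
    kind: TrainingJob
    name: resnet-training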

Pod Group Policies

FEATURE STATE: Kubernetes v1.35 [alpha] (disabled by default)

Every pod group defined in a Workload must declare a scheduling policy. This policy dictates how the scheduler treats the Pods in that group.

Policy types

The API currently supports two policy types: basic and gang. You must specify exactly one policy for each group.

Basic policy

The basic policy instructs the scheduler to treat all Pods in the group as independent entities, scheduling them using the standard Kubernetes behavior.

The main reason to use the basic policy is to organize the Pods within your Workload for better observability and management.

Use this policy for groups that do not require simultaneous startup but still logically belong to the application, or to leave room for future group-level constraints that do not imply "all-or-nothing" placement. A combined example after the gang policy below shows a basic group used alongside a gang group.

policy:
  basic: {}

Gang policy

The gang policy enforces "all-or-nothing" scheduling. This is essential for tightly-coupled workloads where partial startup results in deadlocks or wasted resources.

This can be used for Jobs or any other batch process where all workers must run concurrently to make progress.

The gang policy requires a minCount parameter:

policy:
  gang:
    # The number of Pods that must be schedulable simultaneously
    # for the group to be admitted.
    minCount: 4
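
Putting the two policies together, the machine learning job mentioned earlier could schedule its driver independently while gang-scheduling its workers. This is a sketch that reuses the training-job example from the API structure section; the driver group is an assumption made for illustration.

apiVersion: scheduling.k8s.io/v1alpha1
kind: Workload
metadata:
  name: training-job-workload
  namespace: some-ns
spec:
  controllerRef:
    apiGroup: batch
    kind: Job
    name: training-job
  podGroups:
  - name: driver
    policy:
      # The driver Pod is scheduled independently, using standard behavior.
      basic: {}
  - name: workers
    policy:
      gang:
        # All 4 worker Pods must be schedulable at once, or none are placed.
        minCount: 4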
