Kubernetes v1.36 Sneak Peek: HPA Scale-to-Zero, In-Place Pod Resize, and Ephemeral Image Tokens
The Kubernetes project published its v1.36 sneak peek targeting an April 22 release. Thirty-six enhancements graduate, led by HPA scale-to-zero enabled by default, in-place pod resize at the pod level, and short-lived image pull tokens replacing static secrets.
The Kubernetes project published its v1.36 sneak peek on March 30, targeting a release date of April 22, 2026. Thirty-six enhancements graduate in this cycle — a number that reflects the maturation of features that have been baking in alpha and beta for multiple releases. Three stand out as immediately impactful for production clusters.
HPA Scale-to-Zero Goes Default
Horizontal Pod Autoscaler scale-to-zero has long been among the most requested HPA features. v1.36 enables it by default. Workloads with an HPA configured can now scale down to zero replicas when idle and scale back up on incoming traffic.
The practical implication: serverless-style economics on standard Kubernetes without a separate operator. Batch jobs, webhook processors, event-driven consumers, and dev-environment deployments can now sit at zero cost between bursts without maintaining a dedicated warm pool.
The feature requires a traffic-generating trigger to wake the deployment, typically an external event source such as KEDA or a custom metrics adapter. Once the trigger and metrics are configured, the HPA handles the scaling itself.
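As a minimal sketch, an HPA that scales to zero would presumably just set `minReplicas: 0` in the standard `autoscaling/v2` API (the workload name and queue-depth metric below are illustrative, and assume an external metrics adapter is installed):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webhook-processor        # illustrative workload name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webhook-processor
  minReplicas: 0                 # scale-to-zero; previously gated behind a feature flag
  maxReplicas: 10
  metrics:
    - type: External             # external trigger, e.g. queue depth via a metrics adapter
      external:
        metric:
          name: queue_depth      # illustrative metric name
        target:
          type: AverageValue
          averageValue: "5"
```

With zero replicas there is no pod to serve `/metrics`, which is why an external or object metric, rather than a resource metric like CPU, is needed to wake the workload.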
In-Place Pod Resize Extended to Pod Level
In-place pod resize — the ability to change CPU and memory limits on a running pod without killing and recreating it — moves to the pod level in v1.36. Previously the feature operated at the container level within a pod. This matters because multi-container pods with shared resource pools can now be resized with a single operation rather than per-container patches.
For database workloads, sidecar-heavy service meshes, and anything that treats pod restarts as costly (slow startup, pre-warming, cache invalidation), this reduces operational friction significantly. Memory-intensive workloads that need to scale vertically under load without downtime are the clearest beneficiaries.
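For reference, today's container-level resize is applied through the pod's `resize` subresource; a pod-level resize would presumably look similar, with a single patch covering the shared pool rather than one stanza per container. A sketch of the existing container-level form (pod, container name, and values are illustrative):

```yaml
# resize.yaml -- applied with (names illustrative):
#   kubectl patch pod db-0 --subresource resize --patch-file resize.yaml
spec:
  containers:
    - name: postgres             # illustrative container name
      resources:
        requests:
          cpu: "2"
          memory: 4Gi
        limits:
          cpu: "4"               # raised in place; the pod is not recreated
          memory: 8Gi
```

Note that whether a given change can be applied without a restart still depends on each container's `resizePolicy`; memory limit decreases in particular may require a restart.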
Ephemeral Service Account Tokens for Image Pulls
Static image pull secrets have been a security antipattern for years: long-lived credentials stored in Kubernetes Secrets, reused across namespaces, often with broader registry permissions than any individual workload needs. v1.36 introduces short-lived, ephemeral Service Account tokens for image pulls, replacing static secrets entirely for supported registries.
The tokens are scoped to the workload, expire after a configurable short window, and rotate automatically. The practical result: a compromised node no longer gives an attacker reusable registry credentials that persist after the incident is resolved.
This requires registry-side support for token exchange (OIDC or Workload Identity Federation). AWS ECR, GCP Artifact Registry, and Azure ACR all support compatible flows. Self-hosted Harbor and Quay support it with additional configuration.
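The sneak peek doesn't spell out the mechanics, but the adjacent work in current Kubernetes wires ServiceAccount tokens through the kubelet credential provider via `tokenAttributes` in the provider config. A sketch under that assumption (provider binary, image match pattern, and audience are illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
  - name: ecr-credential-provider          # illustrative provider binary
    apiVersion: credentialprovider.kubelet.k8s.io/v1
    matchImages:
      - "*.dkr.ecr.*.amazonaws.com"        # illustrative registry pattern
    defaultCacheDuration: "5m"             # short-lived: credentials expire quickly
    tokenAttributes:
      # Kubelet mints a ServiceAccount token scoped to the pulling workload
      # and hands it to the provider for exchange at the registry.
      serviceAccountTokenAudience: registry.example.com   # illustrative audience
      requireServiceAccount: true
```

The key property is that the credential is derived per workload at pull time and exchanged registry-side, so there is no static secret to exfiltrate from etcd or a compromised node.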
Other Notable Enhancements
CSI volume attachment limits in Cluster Autoscaler: The autoscaler now respects per-node CSI volume attachment maximums when selecting nodes for pending pods. Previously, the autoscaler could select a node type that couldn't actually attach the required volumes, causing pods to remain pending despite a successful scale-up. This eliminates an entire class of confusing "why isn't my pod scheduling" incidents.
Improved Linux/Windows mixed-node support: Scheduling constraints for mixed OS node pools become more ergonomic, reducing the boilerplate node selector and toleration configurations required for Windows workloads.
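For context, this is the boilerplate that mixed-OS pools require today: every Windows workload pins itself with the well-known `kubernetes.io/os` label and, if the pool is tainted, a matching toleration (the taint key and value below are illustrative):

```yaml
# Fragment of a pod spec targeting a Windows node pool today.
spec:
  nodeSelector:
    kubernetes.io/os: windows            # well-known node label
  tolerations:
    - key: "os"                          # illustrative taint key on the Windows pool
      operator: "Equal"
      value: "windows"
      effect: "NoSchedule"
```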
Structured auth config v2: The structured authentication configuration API (alpha since v1.30) graduates to beta, providing a declarative alternative to kube-apiserver flags for configuring OIDC authenticators.
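The declarative form replaces the old `--oidc-*` flag soup with a config file passed via `--authentication-config`. A minimal sketch of the beta API (issuer URL, audience, and claim mapping are illustrative):

```yaml
apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthenticationConfiguration
jwt:
  - issuer:
      url: https://issuer.example.com    # illustrative OIDC issuer
      audiences:
        - my-cluster                     # illustrative audience
    claimMappings:
      username:
        claim: sub                       # map the token's subject to the username
        prefix: "oidc:"                  # avoid collisions with cluster-local users
```

Unlike the flags, the file can define multiple JWT authenticators and supports CEL expressions for claim validation and mapping.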
Release Timeline
- Code freeze: April 9
- RC.1: April 14
- GA release: April 22
The full changelog and beta/alpha feature list is available at kubernetes.io/releases. If you’re running a managed Kubernetes service, expect your cloud provider to offer v1.36 within 4–8 weeks of GA depending on their qualification cycle.