Weekly GitHub Report for Kubernetes: March 16, 2026 - March 23, 2026 (19:49:34)
Weekly GitHub Report for Kubernetes
Thank you for subscribing to our weekly newsletter! Each week, we deliver a comprehensive summary of your GitHub project's latest activity right to your inbox, including an overview of your project's issues, pull requests, contributors, and commit activity.
Table of Contents
I. News
1.1 Recent Version Releases:
The current version of this repository is v1.32.3
1.2 Version Information:
The Kubernetes 1.32 release, announced on March 11, 2025, introduces key updates detailed in the official CHANGELOG, including new features and improvements accessible via additional binary downloads. For comprehensive information, users are encouraged to review the full release notes and changelog linked in the announcement.
II. Issues
2.1 Top 5 Active Issues:
We consider active issues to be issues that have been commented on most frequently within the last week. Bot comments are omitted.
As of our latest update, there are no active issues with ongoing comments this week.
2.2 Top 5 Stale Issues:
We consider stale issues to be issues that have had no activity within the last 30 days. The team should work together to get these issues resolved and closed as soon as possible.
As of our latest update, there are no stale issues for the project this week.
2.3 Open Issues
This section lists, groups, and then summarizes issues that were created within the last week in the repository.
Issues Opened This Week: 0
Summarized Issues:
As of our latest update, there are no open issues for the project this week.
2.4 Closed Issues
This section lists, groups, and then summarizes issues that were closed within the last week in the repository. This section also links the associated pull requests if applicable.
Issues Closed This Week: 6
Summarized Issues:
- Configuration Exposure in cloud-controller-manager: The cloud-controller-manager's /configz endpoint exposes its internal configuration type instead of a versioned API type, which is problematic for proper API versioning and compatibility. The issue suggests converting the internal configuration to a versioned type before passing it to the configz.Set() method to resolve this.
- issues/137427
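The fix suggested in the configz issue above (convert the internal configuration to a versioned type before passing it to configz.Set()) can be sketched in Go. All type names, fields, and the apiVersion string below are illustrative stand-ins, not the real cloud-controller-manager types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical internal configuration type (field names are illustrative,
// not the actual cloud-controller-manager configuration).
type internalCCMConfig struct {
	CloudProvider   string
	ConcurrentSyncs int
}

// Hypothetical versioned API type with stable JSON tags, suitable for
// exposure on an endpoint like /configz.
type versionedCCMConfig struct {
	APIVersion      string `json:"apiVersion"`
	CloudProvider   string `json:"cloudProvider"`
	ConcurrentSyncs int    `json:"concurrentSyncs"`
}

// convertToVersioned mirrors the suggested fix: map the internal type to a
// versioned one before registering it, so the exposed shape is covered by
// API versioning and compatibility guarantees.
func convertToVersioned(in internalCCMConfig) versionedCCMConfig {
	return versionedCCMConfig{
		APIVersion:      "cloudcontrollermanager.config.k8s.io/v1alpha1", // illustrative
		CloudProvider:   in.CloudProvider,
		ConcurrentSyncs: in.ConcurrentSyncs,
	}
}

func main() {
	v := convertToVersioned(internalCCMConfig{CloudProvider: "gce", ConcurrentSyncs: 5})
	out, _ := json.Marshal(v)
	fmt.Println(string(out))
}
```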
- MemoryQoS Pod-Level Cgroup Settings Persistence: Pod-level memory.min cgroup settings are not reset when the MemoryQoS feature is disabled or its memoryReservationPolicy is changed to None, causing the memory reservation to persist incorrectly after kubelet restarts. This leads to inconsistent pod memory reservation states that do not reflect the current configuration.
- issues/137674
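The reset behavior the MemoryQoS issue expects can be sketched as a small decision helper. The function and parameter names are hypothetical, mirroring the issue text rather than the actual kubelet code:

```go
package main

import "fmt"

// desiredMemoryMin sketches the behavior the issue asks for: when MemoryQoS
// is disabled or the reservation policy is "None", the pod-level cgroup
// memory.min value should be reset to 0 rather than left at its old value.
// (Illustrative helper; not the real kubelet cgroup manager.)
func desiredMemoryMin(memoryQoSEnabled bool, reservationPolicy string, requestBytes int64) int64 {
	if !memoryQoSEnabled || reservationPolicy == "None" {
		return 0 // reset: no memory reservation should persist
	}
	return requestBytes // otherwise reserve the pod's memory request
}

func main() {
	fmt.Println(desiredMemoryMin(true, "Default", 512<<20))
	fmt.Println(desiredMemoryMin(false, "Default", 512<<20)) // feature disabled -> 0
	fmt.Println(desiredMemoryMin(true, "None", 512<<20))     // policy None -> 0
}
```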
- ServiceCIDR Status Field Management: The serviceCIDRStrategy's PrepareForCreate and PrepareForUpdate methods fail to clear or properly manage the status field of ServiceCIDR objects, which should be reset or preserved appropriately during creation and update operations. This improper handling can cause stale or incorrect status information to persist.
- issues/137680
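The strategy behavior the ServiceCIDR issue describes can be sketched with stand-in types (not the real API machinery): wipe status on create so clients cannot set it, and preserve the old status on update so stale client-supplied status is ignored:

```go
package main

import "fmt"

// Minimal stand-ins for the real ServiceCIDR API types (illustrative only).
type ServiceCIDRStatus struct{ Conditions []string }
type ServiceCIDR struct {
	CIDRs  []string
	Status ServiceCIDRStatus
}

// prepareForCreate sketches the expected PrepareForCreate behavior:
// client-supplied status is cleared at creation time.
func prepareForCreate(obj *ServiceCIDR) {
	obj.Status = ServiceCIDRStatus{}
}

// prepareForUpdate sketches the expected PrepareForUpdate behavior on the
// main resource: status changes are dropped and the old status is preserved.
func prepareForUpdate(newObj, oldObj *ServiceCIDR) {
	newObj.Status = oldObj.Status
}

func main() {
	obj := &ServiceCIDR{
		CIDRs:  []string{"10.0.0.0/16"},
		Status: ServiceCIDRStatus{Conditions: []string{"Ready"}},
	}
	prepareForCreate(obj)
	fmt.Println(len(obj.Status.Conditions)) // status wiped on create -> 0
}
```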
- StatefulSet Pod Rebuild Triggered by Metadata Removal: Upgrading Kubernetes from version 1.31 to 1.34 caused StatefulSet pods to be rebuilt due to the removal of the volumeClaimTemplates.metadata.creationTimestamp field in the StatefulSet YAML, triggering a rolling update. This change unintentionally forces pod recreation even when no functional changes were made.
- issues/137705
- StatefulSet Pod Template Validation Missing: A StatefulSet can be created without specifying a container image in the pod template, causing the API server to accept the resource without error while the StatefulSet controller fails to create pods. This highlights a lack of podSpec validation that should reject such StatefulSets at admission time to prevent runtime failures.
- issues/137728
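The missing admission-time check described above could look roughly like the sketch below; the Container type and validatePodTemplate function are illustrative, not the real Kubernetes validation code:

```go
package main

import (
	"errors"
	"fmt"
)

// Illustrative stand-in for a pod template container.
type Container struct{ Name, Image string }

// validatePodTemplate sketches the check the issue asks for: reject a pod
// template whose containers are missing an image at admission time, instead
// of letting the controller fail when it tries to create pods.
func validatePodTemplate(containers []Container) error {
	for _, c := range containers {
		if c.Image == "" {
			return errors.New("container " + c.Name + ": image is required")
		}
	}
	return nil
}

func main() {
	// Missing image: should be rejected up front.
	fmt.Println(validatePodTemplate([]Container{{Name: "web"}}))
}
```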
- Static Pod Test Failures Due to Timeout and Timing Issues: Static pod tests in Kubernetes are failing due to a timeout error in the e2e_node static pod test, where a function passed to Eventually failed because two time values unexpectedly matched. This timing issue has caused test failures across multiple recent CI runs, affecting test reliability.
- issues/137737
2.5 Issue Discussion Insights
This section will analyze the tone and sentiment of discussions within this project's open and closed issues that occurred within the past week. It aims to identify potentially heated exchanges and to maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open or closed issues from the past week.
III. Pull Requests
3.1 Open Pull Requests
This section provides a summary of pull requests that were opened in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.
Pull Requests Opened This Week: 0
As of our latest update, there are no open pull requests for the project this week.
3.2 Closed Pull Requests
This section provides a summary of pull requests that were closed in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.
Pull Requests Closed This Week: 54
Key Closed Pull Requests
1. DRAFT Integration tests for tas: This pull request is a draft aimed at adding integration tests for the TAS (Topology-Aware Scheduling) feature in Kubernetes, including various cherry-picked commits to support placement-based scheduling algorithms, plugin usage, and GangScheduling enhancements.
- URL: pull/137565
- Associated Commits: 886e2, d2e87, 9fdfa, 9d971, b42d1, 5f197, c6b96, 36660, 3d2e0, 4b573, 2d6f9, 19a8c, 4728d, dec09, 83dc3, 0433f
2. [WIP] KEP-5732: Add SchedulingConstraints to PodGroup API and use them in TopologyPlacement plugin: This pull request adds a SchedulingConstraints field to the PodGroup API as specified in KEP-5732 and integrates these constraints to enforce topology-aware workload scheduling within the TopologyPlacement plugin.
- URL: pull/137271
3. WIP: Introduce support of DRA for Native Resources: This pull request introduces support for Dynamic Resource Allocation (DRA) of native resources in Kubernetes by adding feature gates, API fields, scheduler and kubelet enhancements to account for native resource claims in pod scheduling, validation, and admission processes.
- URL: pull/136725
Other Closed Pull Requests
- PodGroup and Workload APIs implementation and enhancements: Multiple pull requests implement the PodGroup admission controller, status features, and integration with the Kubernetes Job controller to support gang-scheduling. These include updates to the kube-scheduler, removal of older APIs, addition of protection controllers, and status condition improvements to enhance workload-aware scheduling as specified in KEP-5832.
- [pull/137464, pull/137032, pull/137611, pull/137564, pull/137073]
- PodGroup scheduling improvements and state management: Pull requests introduce the PodGroupCount placement score plugin and snapshot the PodGroup state before scheduling cycles to improve scheduling consistency and prioritization. They also add a PodGroupScheduled condition to indicate scheduling success or failure, enhancing the scheduler's workload-aware capabilities.
- [pull/137488, pull/137073, pull/137611]
- Dynamic Resource Allocation (DRA) features and APIs: Several pull requests add new features to DRA, including the ReconcileOnlyPoolName for per-pool reconciliation, the ResourcePoolStatusRequest API for querying resource pool availability, and support for list types in DeviceAttribute API with enhanced CEL functions. These changes improve resource allocation visibility, filtering, and attribute handling.
- [pull/137365, pull/137028, pull/137190]
- Dynamic Resource Allocation (DRA) feature promotion and testing improvements: Pull requests promote the DRAPartitionableDevices feature to beta and improve DRA integration testing by isolating tests with unique driver names and resource claims, splitting tests for parallel execution to reduce flakiness and test time.
- [pull/137350, pull/137647]
- Kubelet and container lifecycle fixes and enhancements: Pull requests fix image pulling authorization issues, relax validation for in-place resizing of non-sidecar initContainers, and resolve a bug where containers with sidecars and startupProbes fail to restart properly. These changes include adding end-to-end tests and improving container state handling to prevent pods from being stuck.
- [pull/137629, pull/137352, pull/137146]
- CRI streaming and resource listing improvements: A pull request implements KEP-5825 by adding server-side and client-side streaming RPCs to the CRI RuntimeService and ImageService, enabling efficient streaming of container and pod list operations to bypass gRPC message size limits, controlled by a new feature gate CRIListStreaming.
- [pull/136987]
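The chunked-streaming idea behind the CRI change above can be sketched without gRPC, using a channel as a stand-in for the server stream. The names and chunking scheme are illustrative, not the KEP's actual API:

```go
package main

import "fmt"

// Illustrative stand-in for a CRI container record.
type Container struct{ ID string }

// streamContainers sketches the motivation for list streaming: rather than
// one response that can exceed gRPC message-size limits, the list is sent
// as a stream of fixed-size chunks. (A channel stands in for a gRPC server
// stream here.)
func streamContainers(all []Container, chunkSize int) <-chan []Container {
	out := make(chan []Container)
	go func() {
		defer close(out)
		for i := 0; i < len(all); i += chunkSize {
			end := i + chunkSize
			if end > len(all) {
				end = len(all)
			}
			out <- all[i:end]
		}
	}()
	return out
}

func main() {
	all := make([]Container, 10)
	chunks := 0
	for range streamContainers(all, 4) {
		chunks++
	}
	fmt.Println(chunks) // 10 items in chunks of 4 -> 3 chunks
}
```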
- Scheduler bug fixes and gang scheduling coordination: A pull request fixes a scheduler bug where pods scheduled as a gang sharing ResourceClaims were blocked due to pending allocations by reusing pending allocations during PreFilter and coordinating allocation status updates across all pods in the gang.
- [pull/137641]
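The reuse logic in the gang-scheduling fix above can be sketched abstractly: during PreFilter, a ResourceClaim that already has a pending allocation made for another pod in the gang is reused rather than treated as a blocker. The types below are illustrative, not the real DRA scheduler plugin:

```go
package main

import "fmt"

// claimState is an illustrative stand-in for scheduler-tracked claim state:
// a map from ResourceClaim name to the node of its pending allocation.
type claimState struct {
	pending map[string]string
}

// allocationFor sketches the fix: if a shared claim already has a pending
// allocation from another pod in the gang, reuse it instead of blocking.
func allocationFor(s claimState, claim string) (node string, ok bool) {
	if n, found := s.pending[claim]; found {
		return n, true // reuse the gang's pending allocation
	}
	return "", false // no allocation yet; proceed with normal allocation
}

func main() {
	s := claimState{pending: map[string]string{"gpu-claim": "node-a"}}
	n, ok := allocationFor(s, "gpu-claim")
	fmt.Println(n, ok)
}
```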
- Horizontal Pod Autoscaler (HPA) test fix: A pull request fixes the flaky HPAConfigurableTolerance end-to-end test by implementing deterministic per-pod CPU load distribution that bypasses kube-proxy, ensuring accurate CPU utilization measurements for autoscaling decisions.
- [pull/137612]
- StatefulSet rollout bug fix: An automated cherry pick fixes a regression in the Parallel pod management policy by excluding old broken pods from the maxUnavailable budget, improving StatefulSet rollout behavior.
- [pull/137667]
- ServiceCIDR status field fix: A pull request fixes improper wiping of the ServiceCIDR status field by implementing consistent status field wiping behavior aligned with other Kubernetes APIs and adding a feature gate to optionally disable this behavior.
- [pull/137715]
- Leader election release behavior restoration: A pull request links the ReleaseOnCancel logic to a feature gate to restore original leader election exit behavior when the new release on exit feature is disabled, enabling testing without impacting previous behavior.
- [pull/137708]
3.3 Pull Request Discussion Insights
This section will analyze the tone and sentiment of discussions within this project's open and closed pull requests that occurred within the past week. It aims to identify potentially heated exchanges and to maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open or closed pull requests from the past week.
IV. Contributors
4.1 Contributors
Active Contributors:
We consider an active contributor in this project to be any contributor who has made at least 1 commit, opened at least 1 issue, created at least 1 pull request, or made more than 2 comments in the last month.
If there are more than 10 active contributors, the list is truncated to the top 10 based on contribution metrics for better clarity.
| Contributor | Commits | Pull Requests | Issues | Comments |
|---|---|---|---|---|
| pohly | 36 | 2 | 2 | 0 |
| pacoxu | 30 | 5 | 1 | 0 |
| brejman | 29 | 2 | 0 | 0 |
| luxas | 28 | 3 | 0 | 0 |
| vinayakray19 | 24 | 1 | 0 | 0 |
| aaron-prindle | 20 | 2 | 0 | 0 |
| Jefftree | 14 | 4 | 0 | 0 |
| tosi3k | 17 | 1 | 0 | 0 |
| dims | 10 | 6 | 0 | 0 |
| tallclair | 15 | 1 | 0 | 0 |
Access Last Week's Newsletter: