Weekly Project News


Weekly GitHub Report for Kubernetes: November 10, 2025 - November 17, 2025 (12:04:45)

Weekly GitHub Report for Kubernetes

Thank you for subscribing to our weekly newsletter! Each week, we deliver a comprehensive summary of your GitHub project's latest activity right to your inbox, including an overview of your project's issues, pull requests, contributors, and commit activity.


Table of Contents

  • I. News
    • 1.1. Recent Version Releases
    • 1.2. Version Information
  • II. Issues
    • 2.1. Top 5 Active Issues
    • 2.2. Top 5 Stale Issues
    • 2.3. Open Issues
    • 2.4. Closed Issues
    • 2.5. Issue Discussion Insights
  • III. Pull Requests
    • 3.1. Open Pull Requests
    • 3.2. Closed Pull Requests
    • 3.3. Pull Request Discussion Insights
  • IV. Contributors
    • 4.1. Contributors

I. News

1.1 Recent Version Releases:

The current version of this repository is v1.32.3.

1.2 Version Information:

The Kubernetes v1.32.3 patch release, announced on March 11, 2025, delivers fixes and improvements detailed in the official CHANGELOG, with binary downloads available. The release continues the 1.32 line's focus on cluster functionality and stability, as highlighted in the release notes.

II. Issues

2.1 Top 5 Active Issues:

We consider active issues to be those commented on most frequently within the last week. Bot comments are omitted.

  1. kubelet marks the container Ready shortly which is killed for failing in StartupProbe: This issue describes a scenario where a Kubernetes pod configured with a StartupProbe and LivenessProbe but without a ReadinessProbe is incorrectly marked as Ready shortly before the container is killed due to StartupProbe failure. The expected behavior is that the container should remain in a NotReady state until it successfully passes the StartupProbe, but currently, the kubelet sets the container as Ready even when the StartupProbe fails and the container is terminated.

    • The discussion in the comments centers on attempts to reproduce the issue using the provided test application and deployment manifest, with one contributor unable to reproduce the Ready status being set at the API level despite the StartupProbe failure. Clarifications were sought on whether the Ready condition was actually observed as true in the pod status or only seen in kubelet logs, and it was noted that kubelet code sets Ready to true if no ReadinessProbe exists, which may be incorrect behavior. The conversation also touched on the impact of this behavior on deployment readiness and the need for further information to confirm the exact observed behavior.
    • Number of comments this week: 9
  2. hostname -f fails to return FQDN in IPv6-only environments when using images compiled with musl libc: This issue describes a problem where the hostname -f command fails to return the fully qualified domain name (FQDN) in IPv6-only environments when using container images compiled with musl libc. The root cause is that musl libc does not handle hostname resolution properly if the /etc/hosts file lacks an IPv4 address entry and DNS resolution fails, which is common in IPv6-only setups.

    • The comments focus on acknowledging the issue and the need to report it upstream to the musl libc project for a proper fix, while also documenting it within the Kubernetes repository to help users encountering the problem understand its cause and track progress.
    • Number of comments this week: 4
  3. fd leak in kube-apiserver: This issue reports a file descriptor leak in the kube-apiserver, where numerous file descriptors remain unreclaimed during operation, particularly when a pod enters a CrashLoopBackOff state and commands are executed inside the container. The reporter expects that no unused or unreclaimed file descriptors should persist, and provides steps to reproduce the problem along with environment details such as Kubernetes version, OS, and container runtime.

    • The comments indicate that the issue has been categorized under the API machinery SIG and apiserver area, assigned to a contributor, and accepted for triage, showing initial acknowledgment and prioritization by the maintainers.
    • Number of comments this week: 3
  4. cpu.weight for containers cgroup not correct when pod-level resources are resized for guaranteed pods: This issue addresses a problem where the cpu.weight value for container cgroups is not correctly updated when pod-level resources are resized for guaranteed pods in Kubernetes. Specifically, instead of being recalculated based on pod-level requests and limits, the cpu.weight remains set to 1, which is incorrect behavior.

    • The comments include triage labels and assignments indicating the issue has been accepted and assigned for investigation, along with tagging the relevant SIG node team for awareness and collaboration.
    • Number of comments this week: 3
  5. HostAliases doesn't allow FQDN: This issue addresses the inability to specify a fully qualified domain name (FQDN) with a trailing dot in the hostAliases field due to validation restrictions. The user wants to add an entry like "example.com." to the pod's /etc/hosts to prevent the use of search paths when accessing the domain, but the current validation logic disallows hostnames with trailing dots.

    • The comments label the issue under the network SIG and categorize it as a feature request, while also tagging relevant maintainers for awareness and potential action.
    • Number of comments this week: 2
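
The trailing-dot rejection in item 5 follows from DNS-1123 subdomain validation, which has no provision for a root-anchored name. A minimal Python sketch of that style of check (illustrative only, not the exact Kubernetes validation code):

```python
import re

# DNS-1123 subdomain pattern, the style of check applied to
# hostAliases hostnames (illustrative, not the exact Kubernetes code)
DNS1123_SUBDOMAIN = re.compile(
    r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?"
    r"(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$"
)

for name in ("example.com", "example.com."):
    ok = bool(DNS1123_SUBDOMAIN.match(name))
    print(f"{name!r}: {'valid' if ok else 'rejected'}")
```

Accepting the trailing dot would require relaxing this pattern, which is why the report is categorized as a feature request rather than a bug.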

2.2 Top 5 Stale Issues:

We consider stale issues to be issues that have had no activity within the last 30 days. The team should work together to resolve and close these issues as soon as possible.

  1. Zone-aware down scaling behavior: This issue describes a problem with the horizontal pod autoscaler (HPA) scale-in behavior in Kubernetes where the expected zone-aware distribution of pods, governed by topology spread constraints with a maxSkew of 1, is not maintained. Specifically, during scale-in events, pods become unevenly distributed across availability zones, resulting in one zone having significantly fewer pods and causing high CPU usage on the remaining pod in that zone, contrary to the intended balanced spread.
  2. apimachinery's unstructured converter panics if the destination struct contains private fields: This issue describes a panic occurring in the apimachinery's DefaultUnstructuredConverter when it attempts to convert an unstructured object into a Go struct that contains private (non-exported) fields. The panic is caused by the converter trying to set values on unexported fields via reflection, and the reporter expects the converter to skip private fields instead of panicking, offering to submit a fix if this behavior is confirmed as a bug.
  3. Integration tests for kubelet image credential provider: This issue proposes adding integration tests for the kubelet image credential provider, similar to the existing tests for client-go credential plugins. It suggests that since there are already integration tests for pod certificate functionality, implementing tests for the kubelet credential plugins would be a logical and beneficial extension.
  4. conversion-gen generates code that leads to panics when fields are accessed after conversion: This issue describes a bug in the conversion-gen tool where it generates incorrect conversion code for structs that have changed field types between API versions, specifically causing unsafe pointer conversions instead of proper recursive calls. As a result, accessing certain fields like ExclusiveMaximum after conversion leads to runtime panics, highlighting the need for conversion-gen to produce safe and correct conversion functions.
  5. Failure cluster [ff7a6495...] TestProgressNotify fails when etcd in k/k upgraded to 3.6.2: This issue describes a failure in the TestProgressNotify test that occurs when the etcd component in the Kubernetes project is upgraded to version 3.6.2. The test times out after 30 seconds waiting on a result channel, with error logs indicating that the embedded etcd server fails to set up serving due to closed network connections and server shutdowns.
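
Stale item 2 describes a reflection pitfall with a direct analogue in most languages: a converter should skip fields it cannot legally write rather than crash. A small Python sketch of the "skip private fields" behavior the reporter expects (illustrative; the Kubernetes converter operates on Go structs via reflection, not Python objects):

```python
from dataclasses import dataclass, fields

@dataclass
class Spec:
    replicas: int = 0
    _internal: str = ""  # "private" by convention, like an unexported Go field

def apply(obj, data: dict) -> None:
    # Only set fields the caller may legitimately write; silently skipping
    # the rest is the behavior the reporter expects instead of a panic.
    settable = {f.name for f in fields(obj) if not f.name.startswith("_")}
    for key, value in data.items():
        if key in settable:
            setattr(obj, key, value)

s = Spec()
apply(s, {"replicas": 3, "_internal": "ignored"})
print(s.replicas, repr(s._internal))  # 3 ''
```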

2.3 Open Issues

This section lists, groups, and then summarizes issues that were created within the last week in the repository.

Issues Opened This Week: 18

Summarized Issues:

  • Scheduling and Pod Readiness Issues: Several issues highlight problems with pod scheduling and readiness detection, including incorrect handling of non-numeric node taints causing improper unschedulable marking, and kubelet marking containers as Ready just before killing them due to StartupProbe failures. These issues lead to inconsistent pod readiness status and unnecessary scheduling failures.
  • [issues/135243, issues/135248]
  • Resource and Performance Management: Problems with resource accounting and container resource settings are reported, such as incorrect cpu.weight assignment for container cgroups and a race condition in pod admission logic causing rejected pods to inflate resource usage calculations. These issues can cause improper resource allocation and pod rejection.
  • [issues/135260, issues/135296]
  • Network and Connectivity Problems: Issues with kube-proxy failing to generate iptables rules when EndpointSlices contain duplicate IPs, and the Debian/Ubuntu package missing the conntrack dependency, result in failed cluster IP connectivity and kubelet preflight check failures. These network-related problems disrupt normal cluster operations.
  • [issues/135266, issues/135311]
  • API Server and Feature Gate Handling: Several issues address API server behavior and feature gate management, including improper handling of HTTP/2 GOAWAY events causing retry errors, the need to convert a feature gate to a configurable setting for better flexibility, and explicit error throwing for invalid feature gate checks. These improvements aim to enhance error handling and configuration flexibility.
  • [issues/135279, issues/135282, issues/135285]
  • Testing and Dependency Management: Challenges in testing infrastructure and dependency management are discussed, such as preventing test-only dependencies from leaking into downstream consumers and proposing channel-based alternatives to polling for cache synchronization to improve test efficiency and clarity. These issues focus on improving test reliability and dependency isolation.
  • [issues/135314, issues/135315]
  • Pod Startup and Diagnostic Information: Issues related to pod startup diagnostics include the lack of descriptive messages when pods are stuck in ContainerCreating state and requests to re-add image volume conformance tests removed due to outdated CI environments. These affect user ability to diagnose pod startup failures and maintain test coverage.
  • [issues/135269, issues/135299]
  • Configuration and Encoding Issues: Problems with configuration and encoding include the inability to specify FQDNs with trailing dots in HostAliases due to validation restrictions, missing charset encoding in pod log Content-Type headers causing display issues, and requests to make kernel memory overcommit settings configurable in kubelet. These issues impact usability and system behavior customization.
  • [issues/135273, issues/135292, issues/135294]
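
For the cpu.weight bullet above, some context: on cgroup v2, the weight for a container is derived from its CPU request by first computing cgroup v1 cpu.shares and then mapping shares onto the v2 weight range. A sketch of the commonly documented mapping (a hypothetical helper, not the Kubernetes source; consult the kubelet code for the authoritative formulas):

```python
MIN_SHARES, SHARES_PER_CPU, MILLI_PER_CPU = 2, 1024, 1000

def milli_cpu_to_shares(milli_cpu: int) -> int:
    # cgroup v1 cpu.shares derived from the CPU request in millicores
    return max(MIN_SHARES, milli_cpu * SHARES_PER_CPU // MILLI_PER_CPU)

def shares_to_cpu_weight(shares: int) -> int:
    # map cpu.shares [2, 262144] onto cgroup v2 cpu.weight [1, 10000]
    return 1 + (shares - MIN_SHARES) * 9999 // 262142

# A container requesting 2 CPUs should land well above the floor of 1
# that the reported bug leaves in place after a pod-level resize.
print(shares_to_cpu_weight(milli_cpu_to_shares(2000)))  # 79
```

The reported bug is that after a pod-level resize the recalculation is skipped, leaving cpu.weight at 1 regardless of the request.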

2.4 Closed Issues

This section lists, groups, and then summarizes issues that were closed within the last week in the repository. This section also links the associated pull requests if applicable.

Issues Closed This Week: 7

Summarized Issues:

  • CSI Volume and Persistent Volume Test Failures: Kubernetes end-to-end storage tests are failing on the gce-cos-master-serial job due to performance constraints not being met for volume deletion and provisioning at scale. Additionally, some pods fail because of client rate limiter errors, with these failures starting after a specific commit that affected CSI functionality.
  • issues/135239
  • Container Runtime and ImageVolume Feature Incompatibility: A pod fails to start because the container runtime (containerd 2.0.6) does not support the ImageVolume feature enabled by default, causing container creation errors due to a missing directory. This incompatibility prevents pods using image volumes with pull policy Always from running successfully.
  • issues/135247
  • kubectl Proxy Exec/Attach Request Failures: Exec or attach requests made through the kubectl proxy fail with a "unable to upgrade connection: Forbidden" error because the proxy rejects certain API paths by default. This behavior prevents successful execution of commands via the proxy, blocking expected kubectl operations.
  • issues/135252
  • Docker Client and Daemon API Version Mismatch: Build failures occur in sig-release-master-blocking jobs due to the Docker client version (1.52) being too new for the daemon's maximum supported API version (1.43). These failures persist until the underlying Docker version mismatch is resolved.
  • issues/135272
  • go-selinux Library Race Condition Vulnerability: A race condition vulnerability (CVE-2025-52881) exists in the go-selinux library used by Kubernetes, with an upstream fix merged but no new release cut yet. Kubernetes should upgrade to the fixed version once it becomes available to address this security issue.
  • issues/135274
  • Unsupported restartPolicyRules Causing Test Failures: Kubernetes tests fail when the kubelet restarts during cleanup if the Pod spec includes the unsupported restartPolicyRules action "RestartAllContainers" because the relevant feature gate is disabled in some test lanes. Enabling the feature gate resolves these test failures.
  • issues/135277
  • Spam Issue Report: A user flagged an issue as spam and requested that it be closed.
  • issues/135312

2.5 Issue Discussion Insights

This section analyzes the tone and sentiment of discussions within this project's open and closed issues from the past week, aiming to identify potentially heated exchanges and to maintain a constructive project environment.

Based on our analysis, there are no instances of toxic discussions in the project's open or closed issues from the past week.


III. Pull Requests

3.1 Open Pull Requests

This section provides a summary of pull requests that were opened in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.

Pull Requests Opened This Week: 41

Key Open Pull Requests

1. additioal change: This pull request introduces multiple changes to the Kubernetes scheduler framework, including adding cycle state to signature calls, updating node affinity sorting, disabling node resource signatures when extended dynamic resource allocation is enabled, adding and refining unit and integration tests, and making various code and documentation improvements based on review feedback.

  • URL: pull/135284
  • Merged: No
  • Associated Commits: 95a07, 78110, 5ea2d, 31d0f, ab3b2, 150ed, b057b, 2ddd0, 6537c, c8d3d, fde97, bfabd, cf1c1, 66aea, 61be2, 4e289, fa4c1, 994a2, 2d2bd, 73bb4, ce633, 95c75, 6c6fe, 7d1e3, 0fd66, 16df3, 38b3a, c1fb5, 5ef56

2. test: refine vgs resources clean up: This pull request refines the cleanup process of volume group snapshot (vgs) resources to ensure all created resources are gracefully deleted and resolves a previous issue where test namespaces remained stuck in the Terminating status due to statefulset persistent volume claims not being deleted in the correct order.

  • URL: pull/135250
  • Merged: No
  • Associated Commits: ac25e, 8bd22, b8f93

3. Automated cherry pick of #131843: ResourceQuota: partial fix "should verify ResourceQuota with terminating scopes through scope selectors" flake: This pull request is an automated cherry pick of a partial fix for a flake in the ResourceQuota end-to-end test that verifies ResourceQuota behavior with terminating scopes through scope selectors, including improvements to test reliability and debugging output.

  • URL: pull/135265
  • Merged: No
  • Associated Commits: 30dc3, 6718b, 59b4f

Other Open Pull Requests

  • API Server Aggregation and Connection Handling: Multiple pull requests improve the Kubernetes API server's handling of connections and HTTP/2 proxy requests. These include graceful handling of GOAWAY frames and fixing the GetBody function for proxied requests to address related issues, enhancing stability and correctness in the aggregation layer.
  • [pull/135290, pull/135287]
  • Contextual Logging Migration: The kubelet's container manager and the pkg/kubelet/allocation component are migrated to use contextual logging. This effort improves structured logging and log clarity across these components.
  • [pull/135249, pull/135251]
  • Goroutine Leak Fixes: Two pull requests address goroutine leaks by improving channel buffering and introducing stoppable contexts tied to shutdown signals. These fixes prevent blocking and ensure proper cancellation of operations during shutdown.
  • [pull/135241, pull/135278]
  • Etcd Client Library Update: The etcd client library and component are updated to version 3.6.6. This ensures Kubernetes uses the latest SDK improvements and incorporates recent etcd changes.
  • [pull/135270, pull/135271]
  • Feature Gate and Status Updates: The ResourceHealthStatus feature is promoted to Beta, and the reference to the "PodObservedGenerationTracking" feature gate is removed due to its GA status. These changes reflect the evolving maturity of Kubernetes features.
  • [pull/135255, pull/135256]
  • Security and RBAC Improvements: A security fix ensures that namespace deletion requires cluster-scoped permissions, preventing ServiceAccounts with only namespace-scoped roles from deleting their own namespaces. This strengthens RBAC authorization controls.
  • [pull/135310]
  • Bug Fixes for Resource Management: Fixes include closing underlying connections to prevent file descriptor leaks and correcting a race condition in pod readiness status updates. These changes improve resource management and maintain backward compatibility.
  • [pull/135257, pull/135261]
  • Kube API Linter (KAL) Update: The KAL is updated to the latest main branch to fix a stack overflow caused by a recursive API in the optionalfields rule. This resolves issues blocking other KAL enablement pull requests.
  • [pull/135259]
  • Kubectl Command Enhancements: Enhancements include adding a service account example to kubectl create clusterrolebinding help text and introducing a shorthand -r flag for kubectl explain. These improve usability and user guidance.
  • [pull/135258, pull/135283]
  • Audit Policy Wildcard Support: Support for wildcards in GroupResources within audit PolicyRules is added. This allows filtering and disabling audit logging for wildcard subresources, reducing noise in audit logs.
  • [pull/135262]
  • Conntrack Test Update: The end-to-end conntrack test is updated to use different service and target ports. This verifies that cleanup logic correctly identifies conntrack entries without confusion.
  • [pull/135264]
  • Event Timestamp Precision: Microseconds precision is added to the eventTime field in recorded events. This enhances the accuracy and usefulness of event timestamps for consumers.
  • [pull/135276]
  • PodDescriber Error Handling Standardization: The PodDescriber is standardized to always return "NotFound" errors for missing pods, removing special-case logic for deleted pods described via files. This aligns error handling with other resource describers and simplifies future refactoring.
  • [pull/135281]
  • Declarative Validation for Role Resource: Declarative validation is enabled for the Role resource using validation-gen. This implements a feature aligned with the enhancement proposal for declarative validation.
  • [pull/135286]
  • Minor Bug Fixes: Minor fixes include correcting the case of the title in the kubelet/server unit file for proper formatting and consistency.
  • [pull/135291]
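
The goroutine-leak fixes above follow a general pattern: a background worker blocked on a hand-off must be given a shutdown signal so it can exit instead of leaking. A Python analogue of that pattern, using a threading.Event in place of a stoppable context tied to shutdown (illustrative only, not the Kubernetes code):

```python
import queue
import threading

stop = threading.Event()           # stands in for a context tied to shutdown
results: queue.Queue = queue.Queue(maxsize=1)

def worker() -> None:
    # Bounded waits instead of blocking forever on the hand-off, so the
    # worker notices the stop signal even if the consumer has gone away.
    while not stop.is_set():
        try:
            results.put(42, timeout=0.1)
        except queue.Full:
            continue

t = threading.Thread(target=worker)
t.start()
print(results.get())  # consume one result: 42
stop.set()            # signal shutdown; without this the thread would leak
t.join(timeout=2)
print("worker alive:", t.is_alive())
```

The equivalent Go fixes use buffered channels and contexts cancelled on shutdown to achieve the same guarantee.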

3.2 Closed Pull Requests

This section provides a summary of pull requests that were closed in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.

Pull Requests Closed This Week: 11

Key Closed Pull Requests

1. Prototype: client-side preserve unknown fields: This pull request proposes a prototype implementation for client-side preservation of unknown JSON fields during decoding to enhance data handling in the Kubernetes project.

  • URL: pull/135237
  • Merged: No
  • Associated Commits: daf67, 80be6

2. Enable gRPC health checking and round-robin load balancing for etcd client: This pull request aims to enable gRPC health checking and round-robin load balancing for the etcd client to improve connection reliability by proactively detecting both soft and hard failures across all etcd endpoints.

  • URL: pull/135242
  • Merged: No
  • Associated Commits: f9e93, e6f13

3. Fix evented PLEG pod termination latency and reduce test timeout: This pull request aims to fix excessive pod termination latency caused by the evented PLEG's 5-second cache update period by reducing it to 2 seconds, thereby aligning termination times with expectations and allowing a corresponding reduction in test timeouts.

  • URL: pull/135240
  • Merged: No
  • Associated Commits: 01ba7

Other Closed Pull Requests

  • Container Runtime and Image Handling Fixes: Multiple pull requests address issues related to container runtime tests and image handling in Kubernetes. These include fixing the Container Runtime blackbox test to enable pulling images from private registries using secrets, removing an outdated image volume e2e test due to containerd version constraints, and testing CRI-O canary jobs with a specific crun version to ensure runtime reliability.
  • [pull/135244, pull/135254, pull/135253]
  • Volume and ResourceVersion Test Improvements: A pull request fixes volume performance tests by correcting the informer’s handling of ResourceVersion to prevent a LIST/WATCH gap that caused test timeouts despite successful PVC provisioning. This ensures more reliable volume-related testing outcomes.
  • [pull/135246]
  • Code Consistency and Linter Enhancements: One pull request updates about 100 comments to match unexported function names using an auto-fix feature of a new linter. This prevents linter warnings and improves code consistency across the codebase.
  • [pull/135263]
  • Dependency Update for SELinux: A pull request updates the dependency github.com/opencontainers/selinux to version v1.13.0 to fix a tracked issue, ensuring the project uses the latest stable SELinux integration.
  • [pull/135275]
  • Pod Security Admission and ProcMount Test Update: The procMount end-to-end test is updated to use the Pod Security Admission Restricted level instead of Baseline. This change reflects the intentional policy relaxation in Kubernetes 1.35+ allowing UnmaskedProcMount for pods with user namespaces and fixes a related ci-crio-userns-e2e-serial test failure.
  • [pull/135305]
  • PodDescriber Error Handling Standardization: A pull request standardizes the PodDescriber error handling to consistently return "NotFound" errors for missing pods, removing special-case logic for deleted pods described via files. This aligns its behavior with other resource describers and simplifies future refactoring.
  • [pull/135268]

3.3 Pull Request Discussion Insights

This section analyzes the tone and sentiment of discussions within this project's open and closed pull requests from the past week, aiming to identify potentially heated exchanges and to maintain a constructive project environment.

Based on our analysis, there are no instances of toxic discussions in the project's open or closed pull requests from the past week.


IV. Contributors

4.1 Contributors

Active Contributors:

We consider an active contributor in this project to be any contributor who has made at least 1 commit, opened at least 1 issue, created at least 1 pull request, or made more than 2 comments in the last month.

If there are more than 10 active contributors, the list is truncated to the top 10 based on contribution metrics for better clarity.

Contributor   Commits  Pull Requests  Issues  Comments
pohly              53              4      15        59
neolit123          18              5       1        75
BenTheElder        36              2       1        52
bwsalmon           59              4       2        21
liggitt            18              1       0        59
macsko             40              4       3        28
michaelasp         13              6       0        50
yongruilin         58              1       1         9
HirazawaUi         15              2       3        35
tchap              41              4       0         9
