Weekly GitHub Report for Kubernetes: August 18, 2025 - August 25, 2025
Thank you for subscribing to our weekly newsletter! Each week, we deliver a comprehensive summary of your GitHub project's latest activity right to your inbox, including an overview of your project's issues, pull requests, contributors, and commit activity.
Table of Contents
I. News
1.1 Recent Version Releases:
The current version of this repository is v1.32.3
1.2 Version Information:
The Kubernetes 1.32 release, announced on March 11, 2025, introduces key updates and improvements detailed in the official CHANGELOG, with additional binary downloads available. This version continues the trend of enhancing cluster functionality and stability, as highlighted in the release notes.
II. Issues
2.1 Top 5 Active Issues:
We consider active issues to be issues that have been commented on most frequently within the last week. Bot comments are omitted.
- [FG: EnvFiles] Proposal for env file syntax extension and improvement: This issue proposes extending and improving the syntax of environment variable files used in Kubernetes, specifically to support multi-line values (such as PEM-formatted private keys) and environment variable interpolation within multi-container pods. The discussion centers on defining a clear, narrowly scoped specification that balances compatibility with existing standards like POSIX shell and dotenv formats, while avoiding complexity and ambiguity, ultimately leaning toward restricting syntax to double-quoted strings without interpolation or multi-line support initially.
  - The comments reflect a detailed debate on which existing env file standards to reference, with concerns about under-specification and ambiguity in multi-line and interpolation features; many participants caution against adding interpolation due to complexity and potential pitfalls, favoring a minimal, POSIX-compatible syntax that disallows unquoted or loosely quoted values, multi-line strings, and interpolation, while emphasizing the importance of a tight, explicit spec to avoid user confusion and maintain compatibility with shell environments.
  - Number of comments this week: 25
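The narrowly scoped syntax the discussion leans toward — double-quoted values only, no interpolation, no multi-line strings — can be sketched as a small parser. The function below is a hypothetical illustration of that restricted grammar, not the proposed Kubernetes implementation:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseEnvFile accepts only the restricted form discussed in the issue:
// KEY="value" with a double-quoted value, no interpolation, no multi-line
// strings. Blank lines and #-comments are skipped; anything else errors.
func parseEnvFile(input string) (map[string]string, error) {
	vars := make(map[string]string)
	for i, line := range strings.Split(input, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, raw, ok := strings.Cut(line, "=")
		if !ok || !strings.HasPrefix(raw, `"`) {
			return nil, fmt.Errorf("line %d: expected KEY=\"value\"", i+1)
		}
		// strconv.Unquote enforces a single well-formed double-quoted
		// string, rejecting trailing garbage and unterminated quotes,
		// which is what rules out raw multi-line values.
		value, err := strconv.Unquote(raw)
		if err != nil {
			return nil, fmt.Errorf("line %d: %v", i+1, err)
		}
		vars[key] = value
	}
	return vars, nil
}

func main() {
	got, err := parseEnvFile("# demo\nAPP_MODE=\"prod\"\nTOKEN=\"a=b c\"\n")
	fmt.Println(got, err)
}
```

Keeping the grammar this tight is exactly the trade-off the thread favors: ambiguous inputs fail loudly instead of being interpreted differently by Kubernetes and a POSIX shell.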
- Regression in events POST latency at scale: This issue reports a regression in the latency of POST requests to the events resource at large scale (5,000 nodes) in Kubernetes version 1.34, where latency has increased beyond the expected threshold of 300ms. The discussion investigates potential causes including changes in etcd performance, test infrastructure, and recent commits, but no definitive root cause has been identified yet, with some correlation noted between increased etcd resource usage and latency.
  - The comments include triaging the issue to relevant SIGs and confirming the regression affects the 1.34 release; participants analyze test details, rule out major code changes in the suspect commit range, and examine etcd latency and resource metrics. They consider external factors such as test infrastructure and COS image changes, note that the test still meets SLOs despite increased latency, and discuss possible links to recent PRs and etcd issues, concluding that further observation is needed before a fix is prioritized.
  - Number of comments this week: 19
- Add gofmt to hack/update-codegen.sh: This issue proposes adding the use of gofmt with the -s (simplify) argument to the hack/update-codegen.sh script to ensure all generated code is consistently simplified and formatted, addressing current cases where code complexity is manually added to generators. The discussion explores different implementation approaches, including running gofmt on generated files after code generation or integrating it directly within the code generation tool (gengo), while considering performance impacts, downstream usability, and options to disable the formatting step if needed.
  - The comments include a rough implementation sketch and performance measurements showing minimal impact when running gofmt after code generation. Participants debate whether to call gofmt externally or integrate it inside gengo, weighing pros and cons such as fragility, downstream compatibility, and the need for an option to disable formatting. The conversation concludes with agreement on adding gofmt support, suggestions for fallback behavior if gofmt is missing, and references to related pull requests implementing these changes.
  - Number of comments this week: 18
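The "integrate it in the generator" approach has a standard-library analogue worth noting: go/format can normalize generated source in-process. As a hedge, go/format applies gofmt's formatting but not the -s simplifications the issue asks for — those still require invoking the gofmt tool itself, which is why the discussion weighs shelling out versus gengo integration. A minimal sketch:

```go
package main

import (
	"fmt"
	"go/format"
)

// formatGenerated normalizes emitted Go source through the standard
// formatter so generator output is deterministic regardless of how the
// templates space their tokens. This covers formatting only; gofmt -s
// rewrites (e.g. simplifying composite literals) are not applied here.
func formatGenerated(src []byte) ([]byte, error) {
	return format.Source(src)
}

func main() {
	messy := []byte("package out\n\nvar   x=map[string]int{ \"a\":1 }\n")
	clean, err := formatGenerated(messy)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(clean))
}
```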
- In-Place Updates for StatefulSet: This issue addresses the problem where updating only metadata such as labels or annotations in a StatefulSet's pod template causes the StatefulSet controller to unnecessarily recreate all pods, leading to avoidable service disruptions. The user requests an enhancement to support in-place updates for metadata changes without pod recreation, aligning StatefulSet behavior with direct pod modifications and reducing operational overhead and downtime.
  - The comments explain that the current StatefulSet implementation triggers rolling updates based on template hash changes, which requires pod recreation. Suggestions include adopting in-place update mechanisms similar to OpenKruise’s Advanced StatefulSet to allow metadata and container image updates without restarts, while acknowledging the complexity and risks involved in mutating pods dynamically. An opt-in approach is proposed to balance safety and flexibility, and the discussion highlights the inconsistency between direct pod annotation changes and StatefulSet template updates.
  - Number of comments this week: 6
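The recreation behavior follows from the controller hashing the serialized pod template to detect revisions. The toy computation below — a hypothetical stand-in for the controller's actual revision-hash logic, not its real code — shows why a metadata-only edit still yields a new revision and hence a rollout:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// templateHash mimics, in spirit, deriving a revision identity from the
// serialized pod template: any byte that changes in the template,
// including labels or annotations, changes the hash.
func templateHash(serializedTemplate string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(serializedTemplate))
	return h.Sum32()
}

func main() {
	before := `{"metadata":{"annotations":{"team":"a"}},"spec":{"image":"app:v1"}}`
	after := `{"metadata":{"annotations":{"team":"b"}},"spec":{"image":"app:v1"}}`
	// Only an annotation differs, yet the revision hash changes, which
	// is what triggers the rolling restart the issue complains about.
	fmt.Println(templateHash(before) != templateHash(after))
}
```

An in-place path would need to classify which template diffs are safe to patch onto live pods rather than treating every hash change as a new revision.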
- Will kubelet re-execute the startup probes of the pods on the node when it restarts?: This issue concerns whether the kubelet re-executes the startup probes of pods on a node after the kubelet is restarted, which appears to cause pods to enter a ContainerNotReady state despite startup probes being intended as a one-time check. The user reports this behavior led to an online incident and seeks confirmation of whether this is expected behavior, along with any best practices to handle it.
  - The comments confirm this is a recognized bug linked to a broader enhancement proposal, with triage marking it as important-longterm. A reproduction example is provided to demonstrate the issue, and it is noted that a fix will require implementing a Kubernetes Enhancement Proposal (KEP) as part of a larger task.
  - Number of comments this week: 5
2.2 Top 5 Stale Issues:
We consider stale issues to be issues that have had no activity within the last 30 days. The team should work together to get these issues resolved and closed as soon as possible.
- apimachinery's unstructured converter panics if the destination struct contains private fields: This issue describes a panic occurring in the apimachinery's DefaultUnstructuredConverter when it attempts to convert an unstructured object into a destination struct that contains private (non-exported) fields. The reporter expects the converter to safely ignore these private fields instead of panicking, as the current behavior causes failures especially when working with protobuf-generated gRPC structs that include private fields for internal state.
- Integration tests for kubelet image credential provider: This issue proposes adding integration tests for the kubelet image credential provider, similar to the existing tests for client-go credential plugins. It suggests that since there are already integration tests for pod certificate functionality, implementing tests for the kubelet credential plugins would be a logical and beneficial extension.
- conversion-gen generates code that leads to panics when fields are accessed after conversion: This issue describes a bug in the conversion-gen tool where it generates incorrect conversion code for structs that have changed field types between API versions, specifically causing unsafe pointer conversions instead of proper recursive conversion calls. As a result, accessing certain fields like ExclusiveMaximum after conversion leads to runtime panics, indicating that the generated code does not safely handle pointer and value conversions as expected.
- Failure cluster [ff7a6495...] TestProgressNotify fails when etcd in k/k upgraded to 3.6.2: This issue describes a failure in the TestProgressNotify test that occurs when the etcd component in the Kubernetes project is upgraded to version 3.6.2. The test times out after 30 seconds waiting on a result channel, with multiple errors indicating that the embedded etcd server fails to set up serving due to closed network connections and server shutdowns.
- PodLevelResources + DRA discovery: This issue addresses the integration and cooperative behavior between the PodLevelResources API, which is moving toward beta status in version 1.34, and the Device Resource Allocation (DRA) feature that has reached general availability. It focuses on clarifying how to handle scenarios where a pod specification includes both pod-level resource configurations and container-specific resource claims, aiming to update the relevant enhancement proposal and prioritize implementation work for a seamless transition of PodLevelResources to general availability.
2.3 Open Issues
This section lists, groups, and then summarizes issues that were created within the last week in the repository.
Issues Opened This Week: 30
Summarized Issues:
- LoadBalancer healthCheckNodePort issues: This issue proposes adding an option to disable automatic allocation of healthCheckNodePort for LoadBalancer services with externalTrafficPolicy set to Local in on-premises Kubernetes environments. This aims to prevent unnecessary open ports, security risks, and operational confusion when external load balancers handle health checks independently.
- issues/133583
- Dynamic Resources Allocator error handling and scoring: Problems exist in the Dynamic Resources Allocator's bindClaim function due to insufficient error tracking, requiring explicit boolean flags and new unit tests to simulate pod status patching errors. Additionally, the lack of a scorer in the dynamicresources scheduler plugin is addressed by proposing a basic scoring approach to improve binpacking during the Beta phase of extended resources.
  - issues/133590, issues/133669
- StatefulSet rollout and pod template update improvements: A new Progressing condition is proposed for StatefulSet status to provide clearer rollout insights similar to Deployments. Also, modifying only metadata in a StatefulSet's pod template currently triggers unnecessary pod recreation, and enabling in-place updates for such changes is proposed to reduce service disruption.
  - issues/133592, issues/133637
- Volume management and scheduling failures: The Kubelet Volume Manager can incorrectly mark volumes as unmounted after transient API server failures, causing failed unpublish operations and dangling mounts. Additionally, a flaky volume scheduling test fails due to pod scheduling timeouts during volume binding, blocking release progress.
- issues/133597, issues/133611
- Performance regression in events resource latency: A performance regression has caused doubled latency for POST requests to the Kubernetes events resource at large scale (5,000 nodes), potentially linked to etcd or infrastructure changes, and is under investigation.
- issues/133600
- Scheduler resource claim allocation bug: The scheduler incorrectly schedules multiple pods on the same node despite resource claim limits, indicating a problem with the assume cache update mechanism for node-local resource allocation.
- issues/133602
- Environment variable file syntax enhancements: Proposals include extending Kubernetes-style environment variable files to support lossless multi-line values with double quotes and optional environment variable interpolation. This addresses current limitations such as handling PEM-formatted keys and per-container variable substitution in multi-container pods.
- issues/133606
- InPlacePodVerticalScaling conformance testing gaps: The pods/resize API endpoints introduced in release 1.32.0 lack conformance testing, with three key endpoints remaining untested and needing fixes before the feature reaches general availability.
- issues/133607
- Code generation formatting improvements: Enhancing the hack/update-codegen.sh script to run gofmt -s on all generated Go code outputs is proposed to ensure consistent, simplified generated code by removing complexity added in code generators.
  - issues/133609
- Conformance test suite coverage for VAC API: The VAC API test found in the Kubernetes e2e storage volumeattributesclass.go file was missed during GA graduation and needs to be added to the conformance test suite.
- issues/133610
- CI test flakes due to environment issues: The ci-dra-integration job's DRA upgrade/downgrade tests intermittently fail because the local-up-cluster.sh script cannot locate the GNU sed binary, causing test flakes in the Kubernetes project.
- issues/133626
- Pod status resource defaulting investigation: There is an investigation into whether pod status defaulting should change from allocated resources to actuated resources, focusing on aligning memory request defaults and exploring prepopulating actuatedResources with allocatedResources for new pods.
- issues/133629
- ValidatingAdmissionPolicy cross-resource validation enhancement: Enhancing ValidatingAdmissionPolicy to support CEL expressions referencing other on-cluster resources during admission is proposed to enable richer, declarative policy enforcement without custom admission webhooks.
- issues/133631
- Windows kube-proxy TCP idle timeout limitation: Windows kube-proxy lacks a configurable established TCP idle timeout similar to Linux, causing idle TCP connections from Windows pods to Kubernetes services to be dropped after about four minutes.
- issues/133639
- Security risk from overly permissive volume permissions: Volumes created from ConfigMaps and Secrets are assigned overly permissive 777 permissions by default, allowing any system user to read and write, whereas more restrictive permissions like 644 or 600 are expected.
- issues/133641
- Aggregated API server health check stream errors: Periodic kube-apiserver health checks cause "http2: stream closed" errors in calico-apiserver logs because the health check client closes the HTTP stream immediately after response headers, preventing completion of the JSON response.
- issues/133644
- Taint eviction controller sporadic failure: The taint eviction controller sporadically fails to evict certain NotReady nodes, especially controlplane nodes, after five minutes despite identical cluster setups, causing nodes not to drain as expected during maintenance.
- issues/133650
- Apiserver panic due to incompatible CRD after upgrade: Upgrading to Kubernetes 1.33 causes a panic in the apiserver due to an old CRD manifest incompatible with the new version, leading to a nil pointer dereference when the CRD schema is missing.
- issues/133651
- Pod startup timeout with extended resource requests: Pods requesting extended resources fail to reach the Running phase and instead time out, indicating potential issues with resource scheduling or networking in the test environment.
- issues/133653
- Sample-controller code generation update request: The sample-controller should be updated to use register-gen for generating zz_generated.register.go following best practices exemplified by deepcopy-gen.
- issues/133656
- GracefulNodeShutdown feature test failures: The GracefulNodeShutdown feature consistently fails to gracefully shut down pods with various grace periods due to invalid pod status updates, causing failures in serial node tests across container runtimes.
- issues/133657
- POD Resources API test panics on crio serial lanes: All POD Resources API tests fail on crio serial lanes due to a panic caused by assignment to an entry in a nil map during test setup, resulting in test crashes.
- issues/133658
- AWS scale test job failures blocking release: AWS load test jobs for the ci-kubernetes-e2e-kops-aws-scale-amazonvpc-using-cl2 job have been failing since August 19, 2025, due to worker nodes not joining the cluster and critical pods being unready, blocking the Kubernetes 1.34 release.
  - issues/133661
- CronJob lastFailureTime status field proposal: Adding a lastFailureTime field to CronJob status is proposed to explicitly track the time of the last failed job execution, improving the ability to determine recent job success or failure without indirect comparisons.
  - issues/133662
- Aspire CLI Helm deployment service disabling difficulty: Users face difficulty temporarily disabling multiple services generated by Aspire CLI in Helm deployments, seeking a way to deploy only one service for testing without errors caused by disabling others in values.yaml.
- issues/133666
- Global cache for extended resource device classes: A global cache is proposed to map extended resources to device classes to address limitations where certain noderesources plugin code paths cannot access preFilter stage calculations before scheduling cycles start.
- issues/133670
- Resource quotas for DRA extended resources discussion: Discussion and implementation of resource quotas for DRA extended resources in their Beta phase are ongoing as outlined in the related Kubernetes Enhancement Proposal (KEP).
- issues/133671
2.4 Closed Issues
This section lists, groups, and then summarizes issues that were closed within the last week in the repository. This section also links the associated pull requests if applicable.
Issues Closed This Week: 14
Summarized Issues:
- Test Failures Due to Missing Kubernetes Release Files: Several issues report failing tests and job preparations caused by attempts to download non-existent Kubernetes release files, specifically stable-1.34.txt. These failures affect the DRA upgrade/downgrade process and Dynamic Resource Allocation beta feature tests, leading to persistent errors since August 7, 2025.
- [issues/133506, issues/133507]
- Resource Allocation and Race Conditions: There are concurrency issues in the dynamic resource allocation allocator, where shared state is unsafely accessed by multiple goroutines, causing data races. Additionally, flaky scheduler tests fail intermittently due to race conditions involving cache synchronization, resulting in missing resource claims and errors.
- [issues/133586, issues/133594]
- Build and Compilation Errors: Build failures occurred due to an outdated Go compiler version in the golang-tip branch, and compile errors arose from incompatible versions of the structured-merge-diff dependency between apimachinery and kube-openapi. These issues prevented successful builds and required fixes to resolve version mismatches.
  - [issues/133581, issues/133584]
- Networking and IPv6 Test Failures: A Kubernetes network test fails when handling large UDP requests over IPv6, likely caused by a kernel regression affecting UDPv6 segmentation and packet processing. This issue impacts specific Kubernetes test suites related to networking and IPv6 functionality.
- [issues/133361]
- Pod Scheduling and Topology Spread Enhancements: There is a proposal to improve Topology Spread scheduling behavior to allow pods to be scheduled on nodes even when the current skew exceeds the maximum, provided the scheduling does not worsen the skew. This aims to better handle pod distribution during and after zone failures and reduce unnecessary pod pending states.
- [issues/133496]
- Memory Requests and cgroup Support in Vertical Scaling: The InPlacePodVerticalScaling feature currently lacks support for using memory requests to set cgroup values, highlighting a need to track and implement changes to cgroups when memory requests are updated.
- [issues/133539]
- Windows Platform Test Timeouts: The Kubernetes Windows platform end-to-end test ci-kubernetes-e2e-capz-master-windows.Overall has been failing due to timeouts while waiting for container images to be pulled, causing persistent test failures since August 14, 2025.
- [issues/133563]
- Missing Kubernetes Release Version: The stable Kubernetes release version 1.32.0 is missing from the official storage location, preventing users from accessing this specific release.
- [issues/133593]
- ServiceCIDR API and Conformance Conflicts: The ServiceCIDR API is not supported in all Kubernetes clusters and can be blocked by Validating Admission Policies, yet conformance tests require it to be modifiable. This conflict leads to enforcement issues and discussions about excluding ServiceCIDR from conformance requirements.
- [issues/133613]
- GPU Resource Specification and Initialization Issues: Intermittent GPU access failures occur in pods on NVIDIA GPU-enabled nodes when the resources.limits.nvidia.com/gpu field is omitted despite setting the NVIDIA_VISIBLE_DEVICES environment variable. Clarification is sought on whether explicit GPU resource requests are mandatory for reliable GPU initialization with the NVIDIA device plugin.
  - [issues/133618]
- Security Concerns with NFS Volume Permissions: Volumes created via StorageClass managed by the nfs-client-provisioner have overly permissive 777 permissions by default, posing security risks by allowing full access to any user. It is suggested that these permissions should be more restrictive to enhance security.
- [issues/133643]
2.5 Issue Discussion Insights
This section will analyze the tone and sentiment of discussions within this project's open and closed issues that occurred within the past week. It aims to identify potentially heated exchanges and to maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open or closed issues from the past week.
III. Pull Requests
3.1 Open Pull Requests
This section provides a summary of pull requests that were opened in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.
Pull Requests Opened This Week: 44
Key Open Pull Requests
1. feat: add support for webassembly for kubectl: This pull request adds preliminary support for compiling the kubectl command-line tool to WebAssembly, enabling it to run directly in web browsers without requiring an external shell or virtual machine, thereby facilitating new use cases such as browser-based Kubernetes management and training environments.
- URL: pull/133638
- Merged: No
2. Enforce API conventions around Conditions fields via Kube API Linter: This pull request enforces API conventions for the Conditions fields in Kubernetes API types by enabling and configuring the conditions rule in the Kube API Linter to verify that these fields are correctly defined as slices of metav1.Condition with proper annotations, tags, ordering, and patch strategies, while also updating the linter to the latest development version and managing exceptions for existing deviations.
- URL: pull/133605
- Merged: No
3. Use indexer to acclerate volume limit plugin: This pull request improves the performance of the NodeVolumeLimit scheduling plugin by replacing the original volumeAttachment listing implementation with an indexer, resulting in a significant reduction in scheduling time from 1010ms to 14ms when handling large numbers of volumeAttachments.
- URL: pull/133622
- Merged: No
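The speedup pattern behind that change — replacing a full list-and-filter pass with a precomputed index keyed by node — can be sketched generically. The types and function names below are illustrative, not the plugin's actual API:

```go
package main

import "fmt"

type attachment struct {
	Node   string
	Volume string
}

// volumesOnNodeLinear is the "before" shape: every scheduling lookup
// walks all volumeAttachments in the cluster, O(total attachments).
func volumesOnNodeLinear(all []attachment, node string) []string {
	var out []string
	for _, a := range all {
		if a.Node == node {
			out = append(out, a.Volume)
		}
	}
	return out
}

// buildIndex is the "after" shape: pay one pass up front, then each
// per-node lookup touches only that node's attachments.
func buildIndex(all []attachment) map[string][]string {
	idx := make(map[string][]string)
	for _, a := range all {
		idx[a.Node] = append(idx[a.Node], a.Volume)
	}
	return idx
}

func main() {
	all := []attachment{{"n1", "v1"}, {"n2", "v2"}, {"n1", "v3"}}
	idx := buildIndex(all)
	fmt.Println(volumesOnNodeLinear(all, "n1"), idx["n1"])
}
```

With thousands of attachments and repeated per-pod lookups, moving the O(n) scan out of the scheduling hot path is what accounts for the reported 1010ms-to-14ms improvement.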
Other Open Pull Requests
- Documentation improvements: Multiple pull requests enhance documentation by adding top-level user and contributor-focused files such as doc.go, ARCHITECTURE.md, and example_test.go for client-go and apiserver components. These changes address long-standing gaps and provide clearer guidance and examples for users and contributors.
- Test stability and flakiness fixes: Several pull requests improve test reliability by modifying update operations to use retry mechanisms, delaying watcher closures, and adding pauses to reduce flakiness. These changes help prevent errors related to concurrent modifications and ensure proper event queue filling during tests.
- API and feature enhancements: Pull requests introduce new API features such as the status.lastFailureTime field for CronJobs and update resource kind casing for consistency. These updates improve API expressiveness and user experience in command-line help texts.
- Debugging and logging improvements: Some pull requests add enhanced debugging output around test execution and improve event message clarity by including involved object field paths. These changes facilitate troubleshooting and provide better context for multi-container pod events.
- Storage and volume management updates: Multiple pull requests address storage-related issues by marking certain CSI driver errors as transient, adding discovery checks before storage version migration, and removing deprecated rbd storage components from tests. These changes improve stability and align behavior across storage plugins.
- Code refactoring and optimization: Pull requests optimize set debugging by using efficient methods like maps.Clone() and DifferenceSeq(), and improve condition management by refactoring to use slices for in-place removal. These changes reduce memory allocations and improve code readability without altering APIs.
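Of the utilities that bullet mentions, maps.Clone is in the Go standard library (Go 1.21+) and the snapshot-before-mutate pattern it enables can be shown in isolation; DifferenceSeq comes from Kubernetes' own set utilities and is not reproduced here. A minimal stdlib-only sketch with hypothetical names:

```go
package main

import (
	"fmt"
	"maps"
)

// snapshotPods returns an independent copy of a live set so that debug
// output cannot be skewed by later mutation of the original; maps.Clone
// replaces the hand-rolled copy loop such refactors typically remove.
func snapshotPods(live map[string]struct{}) map[string]struct{} {
	return maps.Clone(live)
}

func main() {
	live := map[string]struct{}{"podA": {}, "podB": {}}
	snap := snapshotPods(live)
	delete(live, "podA")
	// The live set shrinks; the snapshot is unaffected.
	fmt.Println(len(live), len(snap))
}
```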
- Scheduler and concurrency fixes: A pull request addresses a data race warning in the scheduler_perf integration test by explicitly shutting down the background klog flush daemon during cleanup. This ensures thread-safe logging operations and prevents race conditions.
- Metrics and monitoring fixes: One pull request fixes a bug in the StartedPodsErrorsTotal metric by counting pods failing during both creation and startup, aligning the metric with its intended meaning and related metrics.
- Command-line help text updates: A pull request clarifies the --selector flag help text in the kubectl expose command to indicate selector inference applies to any resource, not just replication controllers or replica sets, and removes a stray bracket for readability.
- API gating and migration safeguards: A pull request requires the Storage Version Migrator to be gated behind RealFIFO to prevent race conditions caused by out-of-order queuing, and another adds a discovery check to verify object update capability before migration. These changes prevent migration failures and improve reliability.
- Kubelet dependency decoupling: One pull request introduces a minimal local interface, VolumeWaiter, in the kubelet nodeshutdown package to decouple node shutdown logic from the volume manager, maintaining runtime behavior while removing direct dependencies.
- Resource claim and cache updates: A pull request adds a resourceClaimModified flag to the bindClaim function to determine whether the assume cache should be updated, addressing a specific issue.
- Integration test additions: A pull request adds more integration test cases for the StatefulSet registry covering multiple scenarios involving the maxUnavailable parameter.
3.2 Closed Pull Requests
This section provides a summary of pull requests that were closed in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.
Pull Requests Closed This Week: 12
Key Closed Pull Requests
1. test(dra): deflake ci-dra-integration by increasing wait timeouts: This pull request aims to fix flaky Dynamic Resource Allocation (DRA) integration tests by increasing several conservative wait timeouts in the test code to better accommodate delays caused by resource allocation under load, scheduler retries, network latency, and resource cleanup, thereby improving test reliability without affecting production code or significantly extending test runtime.
- URL: pull/133652
- Merged: No
2. Dev1: This pull request titled "Dev1" includes updates to the README.md file and the addition of a new file named mohit.txt, reflecting documentation changes and new content contributions.
- URL: pull/133649
- Merged: No
3. comment out the device plugin test case: supports extended resources …: This pull request comments out the device plugin test case that supports extended resources together with ResourceClaim due to conflicts with another test case involving running pods with extended resources on both device plugin and dra nodes.
- URL: pull/133286
- Merged: 2025-08-19T07:43:35Z
- Associated Commits: 51d4e
Other Closed Pull Requests
- Test Flake Fixes and Stability Improvements: Multiple pull requests address flakes and race conditions in various tests by adding synchronization mechanisms and wait conditions. These changes improve test reliability by ensuring caches are fully synced and statistics converge before proceeding, preventing intermittent failures caused by timing issues and concurrency.
- [pull/133562, pull/133595]
- Dynamic Resource Allocation (DRA) Enhancements: Updates to the DRA component include fixing a data race in the allocator and improving the upgrade/downgrade end-to-end test to handle the absence of stable release URLs. These fixes prevent broken device allocation and test failures related to release timing.
- [pull/133523, pull/133587]
- Scheduler Cache Concurrency Fix: A data race in the scheduler cache's finishBinding method is fixed by changing synchronization from a read lock to a write lock. This prevents unsafe concurrent modifications of pod state when multiple goroutines call finishBinding simultaneously.
  - [pull/133635]
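The class of bug fixed there is easy to reproduce generically: taking only a read lock around code that mutates shared state lets concurrent callers race. A minimal sketch of the corrected pattern — not the scheduler's actual code, just the same lock discipline:

```go
package main

import (
	"fmt"
	"sync"
)

type cache struct {
	mu    sync.RWMutex
	bound map[string]bool
}

// finishBinding mutates shared state, so it must take the write lock.
// The bug pattern was using mu.RLock() here: RLock permits concurrent
// readers AND other RLock holders, so two goroutines could write the
// map at once, which the race detector flags as a data race.
func (c *cache) finishBinding(pod string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.bound[pod] = true
}

func main() {
	c := &cache{bound: make(map[string]bool)}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			c.finishBinding(fmt.Sprintf("pod-%d", i))
		}(i)
	}
	wg.Wait()
	fmt.Println(len(c.bound))
}
```

Run with `go run -race` to see the difference: with Lock the program is clean; with RLock in finishBinding the detector reports concurrent map writes.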
- Storage Counting Regression Fix: A regression in Kubernetes 1.34 release candidates is fixed by correcting the storage counting mechanism to count only objects for a specific resource. This reduces load on etcd and ensures accurate metrics for resources with watch cache disabled.
- [pull/133604]
- ServiceCIDR API Conformance Test Update: PATCH and UPDATE operations are removed from the ServiceCIDR API conformance test because they were already marked ineligible. This ensures only eligible endpoints are tested while ServiceCIDR changes continue to be validated through non-conformance tests.
- [pull/133625]
- Documentation and Repository Content Additions: Separate pull requests propose adding a simple text file and updating the README.md file with documentation changes. These contributions enhance project documentation and repository content.
- [pull/133645, pull/133640]
3.3 Pull Request Discussion Insights
This section will analyze the tone and sentiment of discussions within this project's open and closed pull requests that occurred within the past week. It aims to identify potentially heated exchanges and to maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open or closed pull requests from the past week.
IV. Contributors
4.1 Contributors
Active Contributors:
We consider an active contributor in this project to be any contributor who has made at least 1 commit, opened at least 1 issue, created at least 1 pull request, or made more than 2 comments in the last month.
If there are more than 10 active contributors, the list is truncated to the top 10 based on contribution metrics for better clarity.
Contributor | Commits | Pull Requests | Issues | Comments
---|---|---|---|---
BenTheElder | 30 | 5 | 5 | 126
liggitt | 10 | 4 | 0 | 33
dims | 8 | 2 | 4 | 32
ffromani | 26 | 3 | 0 | 14
HirazawaUi | 6 | 3 | 4 | 29
bart0sh | 8 | 4 | 3 | 18
pohly | 5 | 4 | 8 | 13
yliaog | 7 | 3 | 4 | 16
richabanker | 8 | 6 | 1 | 12
macsko | 8 | 2 | 0 | 16