Weekly GitHub Report for Kubernetes: May 05, 2025 - May 12, 2025
Thank you for subscribing to our weekly newsletter! Each week, we deliver a comprehensive summary of your GitHub project's latest activity right to your inbox, including an overview of your project's issues, pull requests, contributors, and commit activity.
Table of Contents
I. News
1.1 Recent Version Releases:
The current version of this repository is v1.32.3
1.2 Version Information:
The most recent release, v1.32.3, shipped on March 11, 2025. Key updates are detailed in the changelog, with additional binary downloads available alongside it; notable highlights and trends can be found in the linked resources.
II. Issues
2.1 Top 5 Active Issues:
We consider active issues to be those that have been commented on most frequently within the last week. Bot comments are omitted.
- In large model inference scenarios, Management Plane Components are Killed Due to OOM: This issue addresses management plane components being killed by out-of-memory (OOM) errors in large model inference scenarios, where frequent job creation and deletion drive up memory usage in etcd and kube-apiserver. The current workaround is to enlarge the management plane specifications, but the expectation is for the management plane components to remain stable without restarts.
- The comments discuss the need for more information, such as kube-apiserver profiling and Kubernetes version details, to address the issue. Suggestions include tuning the API server for heavy usage and exploring ongoing performance optimization projects. There is a mention of similar cases with high Pod churn affecting memory usage, and a request for profiling data and version information to better understand the problem.
- Number of comments this week: 11
- Add Fallback to Static Version File in Shallow Clones.: This issue addresses the problem of versioning in environments where the Kubernetes source repository is cloned with the `--depth=1` option, which results in the absence of full Git history and tags, causing the `git describe` command to fail and default to an uninformative version. The proposed enhancement suggests introducing a static version fallback file to ensure consistent and predictable versioning when `git describe` cannot determine the version, thereby supporting reproducible builds and minimizing the impact on existing logic.
- The comments discuss the nature of the issue as a feature request, with a pull request already raised to address it. There is a suggestion to use partial clones instead of shallow clones, as shallow clones can break functionality and are not significantly faster. Some comments express concerns about using a static version file, noting it may not accurately reflect the repository's version, and suggest alternative solutions like using git archives or setting environment variables for reproducibility.
- Number of comments this week: 6
- Does the cpu static policy option of PreferAlignByUncoreCache only support even CPU counts?: This issue discusses a problem with the Kubernetes CPU Manager's static policy option, "PreferAlignByUncoreCache," which seems to only support even CPU counts, causing difficulties when assigning an odd number of CPUs. The user reports that when attempting to allocate 7 CPUs, the policy fails to align the CPUs by uncore cache, and instead a fallback strategy is used that does not apply the intended alignment.
- The comments explore whether the failure is related to SMTAlignment errors and discuss the implications of enabling the "full-pcpus-only" policy option. It is clarified that the issue is not an SMTAlignment error, but rather a fallback allocation occurs without proper uncore cache alignment. A contributor notes that uncore cache alignment was designed with full-pcpus-only in mind to mitigate noisy neighbor issues, and they plan to investigate a fix and discuss it with the sig-node group.
- Number of comments this week: 5
- [sig-scheduling] SchedulerPreemption [Serial] validates various priority Pods preempt expectedly with the async preemption: test assumption about finalizers does not reflect KS synthetic deletes: This issue addresses a problem with the Kubernetes scheduler preemption test, where low-priority pods with finalizers are expected to remain undeleted until all high-priority pods have their `.Status.NominatedNodeName` field set. However, due to a field selector used by the scheduler, a synthetic delete event is triggered by the kube-apiserver when a pod succeeds, leading to the premature scheduling of high-priority pods even though the low-priority pods are still present due to their finalizers.
- The comments discuss the occurrence of synthetic delete events in the logs, indicating that low-priority pods are not being deleted as expected due to finalizers, yet the scheduler still receives delete events (a sketch of this watch behavior appears after this list). This behavior is observed in both Kind and OpenShift environments, with logs provided to illustrate the issue. The discussion includes a call for attention from the Kubernetes scheduling leads to address the problem.
- Number of comments this week: 5
- [FG:InPlacePodVerticalScaling] Metrics: This issue is about proposing and implementing additional metrics for the Kubernetes Enhancement Proposal (KEP) 1287, specifically focusing on tracking resize requests at both the pod and container levels, including metrics for latency, infeasibility reasons, and error rates. The goal is to enhance the observability of the resizing process by capturing detailed metrics that are not currently available through the existing API server metrics.
- The comments discuss the categorization of the issue as a feature and its importance for long-term priorities, with a focus on node instrumentation. There is a mention of existing metrics that partially cover error rates, but it is noted that some errors, particularly those related to pod sandbox resizes, are not captured.
- Number of comments this week: 4
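For readers unfamiliar with the synthetic-delete behavior behind the preemption test issue above, here is a minimal client-go sketch. It is illustrative only, not the scheduler's actual code: the key point is that the apiserver reports a DELETED watch event as soon as an object stops matching a field selector, even if the object still exists (for example, held by a finalizer).

```go
// Hedged sketch: not the scheduler's actual code. A watch filtered by a
// field selector reports DELETED for an object that merely stopped
// matching the selector (e.g. a pod's phase flips to Succeeded while a
// finalizer keeps the pod itself alive).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The scheduler excludes completed pods with a selector like this one.
	w, err := client.CoreV1().Pods("").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Succeeded,status.phase!=Failed",
	})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		if ev.Type == watch.Deleted {
			// Either a real deletion, or a "synthetic" one: the pod merely
			// stopped matching the field selector.
			fmt.Println("observed DELETED event")
		}
	}
}
```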
2.2 Top 5 Stale Issues:
We consider stale issues to be issues that have had no activity within the last 30 days. The team should work together to resolve and close these issues as soon as possible.
- apimachinery resource.Quantity primitive values should be public for recursive hashing: This issue addresses the need for the primitive values within the apimachinery `resource.Quantity` struct to be made public to facilitate recursive hashing by libraries such as `hashstructure`, which is currently hindered by these values being private. The lack of public access to these values complicates the detection of changes in Custom Resource Definitions (CRDs) used in projects like `kubernetes-sigs/karpenter`, impacting the ability to efficiently manage resource allocation and detect specification drift.
- APF borrowing by exempt does not match KEP: This issue highlights a discrepancy between the Kubernetes Enhancement Proposal (KEP) and its implementation regarding how the exempt priority level borrows concurrency limits from other levels. Specifically, the KEP outlines a distinct formula for calculating the minimum concurrency limit for exempt levels, which is not reflected in the current implementation, leading to the exempt priority level having a minimum concurrency limit of zero in the default configuration.
- apimachinery's unstructured converter panics if the destination struct contains private fields: This issue describes a problem with the `DefaultUnstructuredConverter` in the Kubernetes `apimachinery` package, where it panics when attempting to convert into a destination struct that contains private fields. The panic occurs because the converter tries to set values on these non-exported fields, which is not allowed in Go, and the user expects the converter to ignore such private fields to prevent the panic (see the sketch after this list).
- Kubernetes component cpu and memory allocation: This issue seeks guidance on setting appropriate CPU and memory requests and limits for key Kubernetes components, such as kube-apiserver, kube-controller-manager, kube-scheduler, and kube-proxy, in a cluster with fewer than 50 nodes. The user is requesting suggestions for these configurations, noting that the cluster was installed using kubeadm.

Since there were fewer than 5 stale issues, all of them have been listed above.
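To make the unstructured-converter report above concrete, here is a small, hedged reproduction sketch: the `withPrivate` struct is hypothetical, while `runtime.DefaultUnstructuredConverter.FromUnstructured` is the real apimachinery API.

```go
// Hedged reproduction sketch for the converter issue above. The struct is
// hypothetical; runtime.DefaultUnstructuredConverter is the real API.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
)

type withPrivate struct {
	Name   string `json:"name"`
	hidden string // unexported: Go reflection may not set this field
}

func main() {
	u := map[string]interface{}{"name": "demo"}
	var out withPrivate

	// As reported, this call can panic when the converter walks into
	// non-exported fields it is not permitted to set.
	err := runtime.DefaultUnstructuredConverter.FromUnstructured(u, &out)
	fmt.Println(out.Name, err)
}
```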
2.3 Open Issues
This section lists, groups, and then summarizes issues that were created within the last week in the repository.
Issues Opened This Week: 20
Summarized Issues:
- Kubernetes Pod Container Status Issues: The "Pods Extended Pod Container Status" test in Kubernetes is incorrectly reporting success for a pending container due to an unexpected exit code 2. This issue has been observed in multiple CI jobs, indicating a recurring problem with the test's accuracy.
- Kubernetes CPU Manager Allocation Problems: The CPU Manager's static policy option "PreferAlignByUncoreCache" fails to allocate an odd number of CPUs when less than a full uncore cache is available. This limitation or bug results in misalignment, particularly when attempting to assign 7 CPUs with SMT enabled.
- Kubernetes Network Connectivity Issues: The kube-controller-manager is unable to connect to the API server due to network abnormalities, yet leader election does not report a failure, despite the watch request being disconnected and the leader election configuration specifying explicit lease duration and renewal deadlines.
- Kubernetes Versioning Inconsistencies: In environments using shallow Git clones, Kubernetes versioning inconsistencies arise, prompting a proposal for a static version fallback file. This would ensure consistent and predictable versioning when the `git describe` command fails (see the sketch after this list).
- Kubernetes Volume Mapping Issues: `volumeDevices` mappings are not created when the host system's `/dev` directory is mounted into a container. This prevents the expected mapping of a block device to a known name within the container, despite other mounted volumes being present.
- Kubernetes Static Content Serving Problems: A user encounters difficulties serving static content through custom paths in a sample API server application. Despite referencing similar implementations and documentation, errors occur when trying to retrieve these paths.
- Kubernetes StatefulSet PVC Retention Issues: The StatefulSetPersistentVolumeClaimPolicy is not retaining PVCs as expected after adopting a pod during scaling. This results in a "pods 'ss-2' not found" error, identified as a failing test and flake under the apps SIG.
- Kubernetes Code Generator Dependency Checks: A lint check is proposed to ensure code generators do not introduce testing dependencies. This involves modifying the script `hack/verify-testing-import.sh` to support this functionality.
- Kubernetes Pod Vertical Scaling Metrics: Additional metrics for KEP 1287 are proposed to track resize requests at pod and container levels. These metrics aim to measure latency, error rates, and identify reasons for infeasible resizes, enhancing monitoring and performance analysis.
- Kubernetes Kubelet Configuration Issues: Disabling the `localStorageCapacityIsolation` setting in the kubelet configuration causes the eviction manager's synchronization process to fail. This failure is due to the inability to capture necessary filesystem information, which is crucial for components beyond local storage isolation.
- Kubernetes Component Version Monitoring: A method to record and monitor `BinaryVersion`, `EmulationVersion`, and `MinCompatibilityVersion` for components is proposed. This involves implementing a new `AddMetrics` method and integrating it into the existing Kubernetes metrics system.
- Kubernetes Scheduler Backoff Duration Issues: The `WithPodMaxBackoffDuration(0)` setting in the Kubernetes scheduler is not respected, causing pods to experience a backoff penalty of approximately 0.999 seconds. This behavior is contrary to user expectations of disabling backoff completely.
- Kubernetes Kubelet SELinux Denials: Creating a large number of files in an `emptyDir` volume causes the kubelet service to fail to restart due to SELinux denials. The failure threshold depends on the node's hardware speed, highlighting the need for the kubelet to handle such scenarios without being hindered by file count or SELinux policies.
- Kubernetes Management Plane OOM Errors: Management plane components are terminated due to OOM errors in large model inference scenarios. Frequent job creation and deletion increase memory usage in etcd and kube-apiserver, necessitating expanded management plane specifications to prevent component restarts.
- Kubernetes Container Readiness Issues: A container's readiness condition remains true and continues to receive traffic even after its liveness probe fails. It is suggested that the container's ready condition should be set to false immediately upon liveness probe failure to prevent traffic from being forwarded to an abnormal container.
- Kubernetes LoadBalancer Test Failures: A failing end-to-end test related to LoadBalancers with ExternalTrafficPolicy set to Local encounters a runtime panic error due to an index out of range. The test is expected to target only nodes with endpoints, as detailed in the stack trace from the test logs.
- Kubernetes Scheduler Preemption Test Issues: A test in the scheduler preemption process fails due to low-priority pods with finalizers being prematurely deleted. This causes high-priority pods to be scheduled before finalizers are removed, violating test assumptions and leading to failures.
- Kubernetes Device Plugin Test Flakes: A flaking test within the sig-node group involves device plugin tests intermittently failing due to expected devices not being available on the local node. This affects multiple CI jobs and has been occurring for a long time.
- Kubernetes Dependency Management: Improved visibility into upcoming dependency issues is proposed through a forward-looking job that updates and analyzes dependencies. This aims to identify potential problems such as breaking changes, unwanted links, or increased dependencies, allowing proactive management.
- Kubernetes Connectivity Pod Lifecycle Test Failures: The "Connectivity Pod Lifecycle" test fails to maintain zero downtime during a Blue-Green deployment due to a connectivity error. The system incorrectly connects to the "blue-pod" instead of the expected "green-pod," causing the test to fail.
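The versioning item above (and the corresponding issue in section 2.1) proposes a static fallback when `git describe` fails. Kubernetes' actual version logic lives in shell scripts (hack/lib/version.sh); the following Go program is only a hedged sketch of the proposed behavior, with the `VERSION` file name and default string chosen for illustration.

```go
// Hedged sketch of the proposed fallback behavior, not the actual build
// logic. The "VERSION" file name and default string are hypothetical.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// gitVersion returns `git describe` output, falling back to the contents of
// a static version file when tags are unavailable (e.g. a --depth=1 clone).
func gitVersion(fallbackFile string) string {
	out, err := exec.Command("git", "describe", "--tags").Output()
	if v := strings.TrimSpace(string(out)); err == nil && v != "" {
		return v
	}
	if data, err := os.ReadFile(fallbackFile); err == nil {
		return strings.TrimSpace(string(data))
	}
	return "v0.0.0-unknown" // last resort, still a predictable value
}

func main() {
	fmt.Println(gitVersion("VERSION"))
}
```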
2.4 Closed Issues
This section lists, groups, and then summarizes issues that were closed within the last week in the repository. This section also links the associated pull requests if applicable.
Issues Closed This Week: 16
Summarized Issues:
- Memory Management in Kubernetes: The Kubernetes project is addressing issues related to memory management, including the need for in-place memory resizing for QOS pods with a static memory policy. Additionally, there are challenges with the kubelet not refreshing immutable secrets after they are deleted and recreated, leading to pods accessing outdated values.
- Testing and Test Failures: Several issues in the Kubernetes project involve test failures and enhancements, such as the need to improve volume expansion tests and address panic errors in unit tests. There are also specific test failures like "TestWatchNotHangingOnStartupFailure" due to storage reinitialization problems.
- Dependency and Configuration Issues: The Kubernetes project is dealing with dependency updates, such as upgrading `go-jose` to a supported version, and configuration challenges like `registerWithTaints` failing to maintain taints during TLS bootstrapping. Additionally, there are issues with `kubectl apply` failing with unhelpful error messages when labels are set to `Null`.
- Pod and DaemonSet Behavior: Issues have been reported regarding pod status reporting, such as pods terminated due to OOM being incorrectly marked as "Succeeded." There are also problems with DaemonSets where old and new pods coexist temporarily on the same node, contradicting expected behavior.
- Kubelet and Service Crashes: The Kubernetes project has encountered issues with the Kubelet service crashing due to a panic caused by an assignment to a nil map. This issue has been resolved in a later version, but it highlights the need for stability in service operations.
- Onboarding and Infrastructure: The onboarding of an s390x build cluster to the Kubernetes community's Prow infrastructure involves several tasks, including setting up a Kubernetes cluster on s390x and integrating with IBM's Secrets Manager and Terraform for infrastructure provisioning.
- Decoder and Panic Errors: A problem with the `YAMLOrJSONDecoder` in Kubernetes can cause a panic due to a negative slice bounds error, highlighting the need for robust error handling in data processing components.
- Probe Configuration: A user seeks guidance on configuring both HTTP and gRPC probes for a Kubernetes Pod, questioning the feasibility of implementing a gRPC probe alongside existing HTTP liveness and readiness probes (a sketch follows this list).
- Pod Deletion and Finalizer Issues: A Kubernetes pod could not be deleted due to an invalid `imagePullSecrets` configuration and a finalizer; this was resolved by disabling the Kyverno policy engine that was interfering with the finalizer removal.
- Miscellaneous: A closed GitHub ticket titled "check" lacks a detailed description or comments, indicating an issue with documentation or communication in the project.
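For the probe-configuration question above: HTTP and gRPC probes can coexist on one container, each probe simply using a different handler. The sketch below uses the typed Go API from `k8s.io/api/core/v1` (a YAML manifest would express the same shape); the image, paths, and ports are hypothetical, and a gRPC probe assumes the server implements the standard gRPC health-checking protocol.

```go
// Hedged sketch: one container with an HTTP readiness probe and a gRPC
// liveness probe. Image, paths, and ports are hypothetical.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := corev1.Container{
		Name:  "app",
		Image: "example.com/app:latest", // hypothetical image
		// HTTP readiness probe: traffic is withheld until /healthz succeeds.
		ReadinessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/healthz",
					Port: intstr.FromInt32(8080),
				},
			},
		},
		// gRPC liveness probe on a separate port: the kubelet restarts the
		// container when the health service stops reporting SERVING.
		LivenessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				GRPC: &corev1.GRPCAction{Port: 9090},
			},
		},
	}
	fmt.Printf("%+v\n", c)
}
```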
2.5 Issue Discussion Insights
This section will analyze the tone and sentiment of discussions within this project's open and closed issues that occurred within the past week. It aims to identify potentially heated exchanges and to maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open or closed issues from the past week.
III. Pull Requests
3.1 Open Pull Requests
This section provides a summary of pull requests that were opened in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.
Pull Requests Opened This Week: 41
Key Open Pull Requests
1. Migrate to declarative validation: ReplicationController/scale spec.replicas field: This pull request migrates the validation of the `replicas` field in the ReplicationController's `/scale` subresource to a declarative validation approach, enabling more consistent error reporting and testing across different group versions, contingent on the activation of specific feature gates.
- URL: pull/131664
- Merged: No
2. update BoundedFrequencyRunner for kube-proxy: This pull request updates the BoundedFrequencyRunner for kube-proxy by moving it into the `pkg/proxy` directory, rewriting parts of it to improve code clarity and functionality, addressing potential race conditions, and refining unit tests, while also incorporating structured logging and removing unnecessary checks, as part of a broader effort to make the codebase easier to review and maintain (a simplified sketch of the pattern follows these key pull requests).
- URL: pull/131615
- Merged: No
3. [FG:InPlacePodVerticalScaling] Move resize allocation logic out of the sync loop: This pull request moves the in-place pod resize allocation logic out of the sync loop and into the allocation manager. It introduces helper methods for managing resize feasibility and deferral, updates the kubelet to avoid resize attempts within the sync loop, and adds an end-to-end test ensuring deferred resizes are retried successfully, laying the groundwork for future prioritized resizes.
- URL: pull/131612
- Merged: No
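As context for the BoundedFrequencyRunner pull request above, the sketch below shows the general bounded-frequency pattern in a self-contained form. It is not the kube-proxy implementation: the names, intervals, and coalescing channel are illustrative, and the real runner also handles retries on error.

```go
// Hedged, self-contained sketch of the bounded-frequency pattern. Run()
// requests are coalesced so fn executes at most once per minInterval, and a
// timer guarantees a run at least once per maxInterval.
package main

import (
	"fmt"
	"time"
)

type boundedRunner struct {
	fn          func()
	minInterval time.Duration
	maxInterval time.Duration
	requests    chan struct{}
}

func newBoundedRunner(fn func(), minIv, maxIv time.Duration) *boundedRunner {
	return &boundedRunner{
		fn:          fn,
		minInterval: minIv,
		maxInterval: maxIv,
		requests:    make(chan struct{}, 1), // capacity 1 coalesces requests
	}
}

// Run requests an execution; duplicate requests collapse into one.
func (b *boundedRunner) Run() {
	select {
	case b.requests <- struct{}{}:
	default: // a run is already pending
	}
}

// Loop drives the runner; real code would also take a stop channel.
func (b *boundedRunner) Loop() {
	last := time.Now().Add(-b.minInterval) // allow an immediate first run
	for {
		select {
		case <-b.requests:
			if wait := b.minInterval - time.Since(last); wait > 0 {
				time.Sleep(wait) // at most once per minInterval
			}
		case <-time.After(b.maxInterval): // at least once per maxInterval
		}
		b.fn()
		last = time.Now()
	}
}

func main() {
	r := newBoundedRunner(func() { fmt.Println("sync at", time.Now()) },
		time.Second, 10*time.Second)
	go r.Loop()
	r.Run()
	r.Run() // typically coalesced with the first request
	time.Sleep(3 * time.Second)
}
```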
Other Open Pull Requests
- Cleanup Tasks in Kubernetes: This topic includes various cleanup tasks aimed at improving the Kubernetes project. These tasks involve implementing checks to prevent issues with duplicate claim names, migrating functions for better code organization, and testing canary jobs to ensure stability after previous changes.
- Dynamic Resource Allocation (DRA) Enhancements: Several pull requests focus on enhancing DRA in Kubernetes. These include revising test labeling for easier identification, introducing a new metric to monitor DRA driver usage (illustrated after this list), and optimizing counter management in the DRA allocator to reduce computational overhead.
- Error Message Improvements: Enhancements to error messages in Kubernetes aim to improve user understanding and troubleshooting. This includes providing clearer error descriptions when creating pod sandboxes with user namespaces on older runtimes.
- Bug Fixes in Kubernetes: Various bug fixes address issues in the Kubernetes project. These fixes include updating templates in the `applyconfig-gen` tool, ensuring disk usage metrics are captured, and removing unnecessary downloads in test images.
- Feature Enhancements and Promotions: New features and promotions in Kubernetes include enabling the "kuberc" feature by default, introducing custom labels for leases, and allowing users to specify directories for image credential providers.
- Automated Cherry-Picks for Stability: Automated cherry-picks ensure stability across different branches of Kubernetes. These include updates to the Windows KubeProxy component and disabling size checking during filesystem resize operations.
- Testing and Documentation Updates: Updates to testing and documentation aim to improve the Kubernetes project. These include adding test cases for scheduler performance, addressing flaky tests, and updating documentation to highlight inconsistencies.
- Versioning and Configuration Improvements: Improvements in versioning and configuration ensure consistent behavior in Kubernetes. This includes using a static version file for versioning and graduating KEP 4633 to stable status for configurable endpoints.
- Miscellaneous Enhancements: Various enhancements include removing misleading comments, addressing staging module issues, and preventing logging of unexpected errors during normal execution.
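As an illustration of the DRA usage metric mentioned above, the following sketch registers a gauge with the standard Prometheus Go client. The metric name, label, and value are hypothetical; the actual pull request may well use Kubernetes' component-base metrics wrappers instead.

```go
// Hedged illustration of a "DRA driver usage" metric; the metric name,
// label, and value here are hypothetical.
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var claimsInUse = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "dra_resource_claims_in_use", // hypothetical name
		Help: "ResourceClaims currently allocated, per DRA driver.",
	},
	[]string{"driver"},
)

func main() {
	prometheus.MustRegister(claimsInUse)
	claimsInUse.WithLabelValues("gpu.example.com").Set(3)

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9090", nil))
}
```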
3.2 Closed Pull Requests
This section provides a summary of pull requests that were closed in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.
Pull Requests Closed This Week: 64
Key Closed Pull Requests
1. Fixes for bad handling of pointers and aliases in validation: This pull request addresses inconsistencies in the handling and validation of pointers and typedefs within the Kubernetes codebase by adding missing test cases, fixing implementations, and standardizing the definition of `Context.Type` for typedefs to ensure validators consistently unalias and unpointer types, as demonstrated through a series of commits that include adding tests, correcting validation logic, and disallowing pointers as listmap keys.
- URL: pull/131399
- Merged: 2025-05-06T02:53:13Z
2. Accumulated cleanups of validation-gen: This pull request involves a series of cleanup commits aimed at improving the validation generation process in the Kubernetes project, including fixing lint warnings, optimizing the `hasValidations` function to return cached values first, reorganizing code blocks for handling named types, renaming the `discover()` function to `discoverType()`, enhancing logging and debugging capabilities, and correcting a typo in a comment.
- URL: pull/131635
- Merged: 2025-05-07T18:31:25Z
3. Declarative validation: Simplify handling of subresources: This pull request simplifies the handling of subresources in the Kubernetes project by funneling all validation of a kind into a single validation function, removing special cases for subresources that share a kind, and making the validation conditional, as groundwork for a future enhancement.
- URL: pull/131560
- Merged: 2025-05-06T16:29:20Z
Other Closed Pull Requests
- Kube-proxy Refactoring: This topic involves refactoring the kube-proxy's proxy update scheduling logic by introducing a new BoundedFrequencyRunner. The changes ensure updates occur only once per minimum interval and at least once per maximum interval, with a retry mechanism based on error returns.
- Documentation Updates: This topic covers updates to the Kubernetes project documentation, including the removal of alculquicondor from the list of sig-scheduling-api-reviewers. Additional changes include commenting out the now-empty sig-scheduling-api-reviewers section and designating sig-scheduling-api-approvers as actual approvers.
- Kubectl FeatureGate Cleanup: This topic involves cleaning up the Kubectl FeatureGate by removing the now-stable KUBECTL_ENABLE_CMD_SHADOW. It also updates KUBECTL_COMMAND_HEADERS to utilize the current FeatureGate mechanism and adds comments with links to KEPs for easier tracking.
- Bug Fixes in Kubernetes: This topic addresses various bug fixes in the Kubernetes project, including manually parsing the log verbosity flag to initialize klog during command preparation. It also includes stabilizing the Windows memory pressure eviction test by waiting for the memory-pressure taint to clear before running the test.
- Service CIDR Controller Refactoring: This topic involves refactoring the default service CIDR controller logic to improve readability and understanding. It also promotes aojea as an approver for the control plane and updates the list of approvers to reflect the current status of the project.
- Client-go Library Update: This topic introduces a new function, `WatchWithContextFunc`, to the `client-go` library in Kubernetes. It deprecates the existing `WatchFuncWithContext` to address naming inconsistencies in context-aware interfaces and improve code clarity.
- Filesystem Resize Operations: This topic addresses the issue of inaccurate disk size readings by disabling the size-checking step during filesystem resize operations for ext and xfs filesystems. It allows tools like `resize2fs` and `xfs_growfs` to handle expansions without pre-checking disk geometry.
- Golangci-lint v2 Migration Issues: This topic addresses critical issues that arose after migrating to golangci-lint v2, including fixing asynchronous linter output that was not visible in failure messages. It also reintroduces necessary linter suppressions for conversion and defaulting functions to adhere to Kubernetes naming conventions.
- Scheme Type Converter Reorganization: This topic reorganizes the scheme type converter by moving it from a testing package to the apimachinery utils. It addresses backward compatibility issues by changing the return type of `NewTypeConverter` to an interface and reducing duplication of the `ToRESTFriendlyName` function.
- Apiserver Error Context: This topic addresses the issue of apiserver errors lacking context when decoding a mutating webhook patch. It treats such decoding failures as errors in calling the webhook, providing more informative error messages that include the webhook's name and the nature of the failure.
- End-to-End Test Flakiness: This topic addresses and fixes issues related to end-to-end test flakiness and failures in the Kubernetes project. It updates the "must manage ResourceSlices" test to the v1beta2 API and adjusts the timeout for the "sequential update with pods replacing each other" test.
- Crashloop Backoff Key Update: This topic adds container resources to the crashloop backoff key in the Kubernetes project. It ensures that the backoff does not need to be manually reset when resource configurations change, as part of a cleanup effort.
- Container Iteration Utility: This topic introduces a new utility called `ContainerIter` to the Kubernetes project. It simplifies and cleans up the process of iterating over pod containers by providing a more straightforward interface compared to the existing method.
- Deployment Controller Optimization: This topic aims to optimize the deployment controller by introducing a ResourceVersion comparison in the updateDeployment function. It skips processing unchanged deployments during periodic resyncs, reducing unnecessary workload and pressure on the Informer Cache RW Lock (a sketch appears after this list).
- HPA Reviewers and Approvers Refresh: This topic aims to refresh the list of Horizontal Pod Autoscaler (HPA) reviewers and approvers in the Kubernetes project. It is part of a cleanup effort, as referenced in issue #128948, without introducing any user-facing changes.
- Logging Format String Fix: This topic addresses a cleanup task by fixing a non-constant format string passed to the `framework.Logf` call within the Kubernetes project. It ensures more consistent and reliable logging behavior.
- Kubelet Authorization Test Bug Fix: This topic addresses a bug and failing test by setting the appropriate groups in the SubjectAccessReview (SAR) used in the `WaitForAuthzUpdate()` function, so that it accurately simulates the service account's identity in the Kubernetes kubelet authorization test.
- Kuberuntime Termination Order Test Flake: This topic addresses a flake issue in the Kubernetes project by increasing the delay in the kuberuntime termination order test. It prevents time rounding errors, as detailed in the commit with SHA 849924b6ba528b625465ef3ffc9bd40b8e4b9af5.
- Windows Proxy Regression Fix: This topic involves automated cherry-picks of a previous fix addressing a regression in Kubernetes version 1.31 on Windows Proxy. It resolves an issue where the Host Network Service (HNS) local endpoint was mistakenly deleted instead of the remote endpoint.
- Error Backoff Lock Contention: This topic addresses the issue of lock contention during error backoff in the Kubernetes project. It implements a lock-free mechanism, resulting in significant performance improvements across various goroutine benchmarks.
- Cgroup Verification Deduplication: This topic focuses on deduplicating cgroup verification in end-to-end tests for InPlacePodVerticalScaling and PodLevelResources within the Kubernetes project. It is part of an effort to streamline and improve the testing process.
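The deployment controller optimization above follows a pattern common across controllers: in an informer update handler, a periodic resync re-delivers an identical object, so an unchanged ResourceVersion means there is nothing new to process. The sketch below mirrors that pattern in a hedged, self-contained form; it is not the exact patch.

```go
// Hedged sketch of the resync-skipping idiom, not the exact patch: skip
// update events whose ResourceVersion has not changed.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func updateDeployment(oldObj, curObj interface{}) {
	oldD, ok1 := oldObj.(*appsv1.Deployment)
	curD, ok2 := curObj.(*appsv1.Deployment)
	if !ok1 || !ok2 {
		return
	}
	// Periodic resync: same ResourceVersion, nothing changed, so skip
	// enqueueing and avoid needless pressure on the informer cache lock.
	if oldD.ResourceVersion == curD.ResourceVersion {
		return
	}
	fmt.Println("enqueue", curD.Namespace+"/"+curD.Name)
}

func main() {
	d1 := &appsv1.Deployment{}
	d1.Name, d1.Namespace, d1.ResourceVersion = "web", "default", "42"
	d2 := d1.DeepCopy()
	updateDeployment(d1, d2) // skipped: identical ResourceVersion

	d2.ResourceVersion = "43"
	updateDeployment(d1, d2) // processed: enqueue default/web
}
```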
3.3 Pull Request Discussion Insights
This section will analyze the tone and sentiment of discussions within this project's open and closed pull requests that occurred within the past week. It aims to identify potentially heated exchanges and to maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open or closed pull requests from the past week.
IV. Contributors
4.1 Contributors
Active Contributors:
We consider an active contributor in this project to be any contributor who has made at least 1 commit, opened at least 1 issue, created at least 1 pull request, or made more than 2 comments in the last month.
If there are more than 10 active contributors, the list is truncated to the top 10 based on contribution metrics for better clarity.
| Contributor | Commits | Pull Requests | Issues | Comments |
|---|---|---|---|---|
| pohly | 32 | 10 | 9 | 80 |
| BenTheElder | 16 | 1 | 6 | 82 |
| aojea | 17 | 4 | 5 | 72 |
| bart0sh | 5 | 3 | 1 | 61 |
| roseteromeo56 | 61 | 0 | 0 | 0 |
| jpbetz | 24 | 7 | 2 | 18 |
| liggitt | 2 | 0 | 1 | 37 |
| danwinship | 4 | 2 | 0 | 34 |
| carlory | 20 | 10 | 1 | 8 |
| dims | 6 | 5 | 3 | 25 |