Weekly Project News


Weekly GitHub Report for Kubernetes: October 06, 2025 - October 13, 2025 (12:04:37)

Weekly GitHub Report for Kubernetes

Thank you for subscribing to our weekly newsletter! Each week, we deliver a comprehensive summary of your GitHub project's latest activity right to your inbox, including an overview of your project's issues, pull requests, contributors, and commit activity.


Table of Contents

  • I. News
    • 1.1. Recent Version Releases
    • 1.2. Version Information
  • II. Issues
    • 2.1. Top 5 Active Issues
    • 2.2. Top 5 Stale Issues
    • 2.3. Open Issues
    • 2.4. Closed Issues
    • 2.5. Issue Discussion Insights
  • III. Pull Requests
    • 3.1. Open Pull Requests
    • 3.2. Closed Pull Requests
    • 3.3. Pull Request Discussion Insights
  • IV. Contributors
    • 4.1. Contributors

I. News

1.1 Recent Version Releases:

The current version of this repository is v1.32.3.

1.2 Version Information:

The Kubernetes version released on March 11, 2025, introduces key updates detailed in the official CHANGELOG, with additional binary downloads available. For comprehensive information on new features and changes, users are encouraged to consult the Kubernetes announce forum and the linked CHANGELOG.

II. Issues

2.1 Top 5 Active Issues:

We consider active issues to be those that have been commented on most frequently within the last week. Bot comments are omitted.

  1. improved preemption logic for the kubernetes scheduler: This issue requests an enhancement to the Kubernetes scheduler's preemption logic to allow multiple pods to share the resources of a single preempted pod, rather than only one pod immediately replacing it. This change aims to reduce scheduling delays and prevent unnecessary cluster autoscaler scale-ups caused by the current limitation where only one pod can be nominated to a node after preemption.

    • The comments discuss the feasibility and implications of allowing multiple pods to receive a nominated node before preemption completes, with concerns about scheduling delays and autoscaler behavior. Contributors express support for the idea, suggest treating pods marked for deletion as non-existent during preemption simulation, and mention potential overlaps with ongoing related work, while encouraging contributions to implement the feature.
    • Number of comments this week: 12
  2. [Flaking test] [InPlacePodVerticalScaling] Pod Resize deferred tests are flaking: This issue addresses flakiness in the Pod Resize deferred tests within the InPlacePodVerticalScaling feature, suggesting that the instability is likely due to how the tests are written rather than a problem with the feature itself. The discussion focuses on potential timing issues, race conditions in resource calculations, and the need for improved logging and timeout adjustments to better diagnose and stabilize the tests.

    • The comments include offers to investigate and suggestions for fixes such as adding retry logic, adjusting timeouts, and enhancing logging; there is also a request for more detailed evidence before implementing changes, leading to a reassessment of the issue’s priority from important-soon to important-longterm due to the tests’ complexity and relative flakiness.
    • Number of comments this week: 8
  3. [Flaking Test] k8s.io/kubernetes/test/integration/etcd.etcd: This issue reports flakiness in the Kubernetes integration test k8s.io/kubernetes/test/integration/etcd.etcd observed in the master-blocking job, with failures linked to repeated gRPC client connection closures during etcd operations. The root cause appears to be a leaked goroutine related to the apiextensions-apiserver finalizer, rather than the etcd integration tests themselves, causing intermittent test failures over several days.

    • The comments clarify that the etcd integration tests themselves pass consistently, and the flakiness is caused by a goroutine leak originating from the CRD finalizer in the apiextensions-apiserver component. Discussion includes historical failure frequency, potential impact of a recent pull request on timing, and debate on whether this flake should block releases, with no definitive resolution provided.
    • Number of comments this week: 8
  4. kubectl get ingressclass should display (default) marker like storageclass: This issue requests that the kubectl get ingressclass command display a (default) marker next to the default IngressClass, similar to how kubectl get storageclass shows the default StorageClass. This enhancement aims to improve consistency and user experience by making it easier to identify the default IngressClass without needing to inspect annotations manually.

    • The comments include assignment of the issue, tagging relevant SIGs for approval, and discussion confirming the feature request is reasonable and uncontroversial, leading to triage acceptance after clarifying label application syntax.
    • Number of comments this week: 7
  5. [Umbrella] KEP-4671: Gang Scheduling for v1.35: This issue tracks the implementation of Gang Scheduling for Kubernetes version 1.35, detailing a series of required changes that may be combined into multiple pull requests. It serves as an umbrella to coordinate and monitor progress on several related tasks assigned primarily to one contributor.

    • The comments section consists solely of multiple brief acknowledgments and tagging to notify relevant parties, indicating an ongoing effort to keep stakeholders informed without detailed discussion.
    • Number of comments this week: 6
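
The preemption discussion in issue 1 above centers on simulating scheduling as if pods marked for deletion were already gone, so that more than one pending pod can be nominated into the freed capacity. A minimal sketch of that idea, with hypothetical names and CPU as the only resource (the real scheduler tracks far more state):

```python
# Sketch: treat pods marked for deletion as absent when checking whether
# pending pods fit on a node. Hypothetical simplification of the idea
# discussed in the preemption issue, not the scheduler's actual code.

def free_cpu(node_capacity, pods):
    """CPU left on a node, ignoring pods already marked for deletion."""
    used = sum(p["cpu"] for p in pods if not p.get("deleting"))
    return node_capacity - used

def pods_that_fit(node_capacity, running_pods, pending_pods):
    """Greedily nominate as many pending pods as fit in the freed capacity."""
    available = free_cpu(node_capacity, running_pods)
    nominated = []
    for pod in sorted(pending_pods, key=lambda p: p["cpu"], reverse=True):
        if pod["cpu"] <= available:
            nominated.append(pod["name"])
            available -= pod["cpu"]
    return nominated

running = [
    {"name": "victim", "cpu": 4, "deleting": True},  # the preempted pod
    {"name": "other", "cpu": 3},
]
pending = [{"name": "a", "cpu": 2}, {"name": "b", "cpu": 2}]
print(pods_that_fit(8, running, pending))  # ['a', 'b']: both fit once the victim is ignored
```

Under the current behavior described in the issue, only one of the two pending pods would be nominated, even though the victim's capacity can accommodate both.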

2.2 Top 5 Stale Issues:

We consider stale issues to be issues that have had no activity within the last 30 days. The team should work together to get these issues resolved and closed as soon as possible.

  1. Zone-aware down scaling behavior: This issue describes a problem with the horizontal pod autoscaler's (HPA) scale-in behavior in a Kubernetes deployment that uses topology spread constraints to evenly distribute pods across multiple zones. Specifically, during scale-in events, the pods become unevenly distributed with one zone having significantly fewer pods than allowed by the maxSkew: 1 setting, causing high CPU usage on the lone pod in that zone and violating the expected balanced pod distribution.
  2. apimachinery's unstructured converter panics if the destination struct contains private fields: This issue describes a panic occurring in the apimachinery's DefaultUnstructuredConverter when it attempts to convert an unstructured object into a destination struct that contains private (non-exported) fields. The reporter expects the converter to safely ignore these private fields instead of panicking, as the current behavior causes failures especially with protobuf-generated gRPC structs that include private fields for internal state.
  3. Integration tests for kubelet image credential provider: This issue proposes adding integration tests for the kubelet image credential provider, similar to the existing tests for client-go credential plugins. It suggests that since there are already integration tests for pod certificate functionality, implementing tests for kubelet credential plugins would be a logical and beneficial extension.
  4. conversion-gen generates code that leads to panics when fields are accessed after conversion: This issue describes a bug in the conversion-gen tool where it generates incorrect conversion code for structs that have changed field types between API versions, specifically causing unsafe pointer conversions instead of properly calling the conversion functions. As a result, accessing certain fields like ExclusiveMaximum after conversion leads to runtime panics, highlighting the need for conversion-gen to produce safe and correct code to prevent such crashes.
  5. Failure cluster [ff7a6495...] TestProgressNotify fails when etcd in k/k upgraded to 3.6.2: This issue describes a failure in the TestProgressNotify test that occurs when the etcd component in the Kubernetes project is upgraded to version 3.6.2. The test times out after 30 seconds waiting on a result channel, with multiple errors indicating that the embedded etcd server fails to set up serving due to closed network connections and server shutdowns.
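
The maxSkew behavior at the heart of the first stale issue can be shown with a small calculation: skew is the difference between the most- and least-populated topology domains, and maxSkew: 1 permits a difference of at most one pod. A simplified illustration (the scheduler's actual logic lives in the pod-topology-spread plugin and accounts for more than raw counts):

```python
# Skew for a topology spread constraint is the gap between the most- and
# least-populated domains. This is an illustrative simplification only.

def skew(pods_per_zone):
    counts = list(pods_per_zone.values())
    return max(counts) - min(counts)

def satisfies_max_skew(pods_per_zone, max_skew=1):
    return skew(pods_per_zone) <= max_skew

balanced = {"zone-a": 3, "zone-b": 3, "zone-c": 2}
after_scale_in = {"zone-a": 3, "zone-b": 3, "zone-c": 1}  # the reported state

print(satisfies_max_skew(balanced))        # True
print(satisfies_max_skew(after_scale_in))  # False: skew is 2, violating maxSkew: 1
```

The issue reports exactly this kind of post-scale-in state: one zone left with a single pod, which then absorbs disproportionate CPU load.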

2.3 Open Issues

This section lists, groups, and then summarizes issues that were created within the last week in the repository.

Issues Opened This Week: 34

Summarized Issues:

  • Flaky and Intermittent Test Failures: Several Kubernetes tests are experiencing flaky or intermittent failures, including integration tests for etcd with goroutine leaks and context deadline exceeded errors, scheduler preemption tests failing due to context cancellation, and storage version garbage collector tests intermittently not returning expected errors. These flaky tests cause inconsistent CI results and complicate debugging efforts, highlighting timing, resource calculation, and concurrency issues in test code and components.
    • issues/134430, issues/134455, issues/134458, issues/134468, issues/134492
  • Scheduler and Scheduling Enhancements: Multiple issues focus on improving Kubernetes scheduling capabilities, including proposals for Gang Scheduling implementation, Workload API creation, workload tracking, and adding WorkloadReference fields to pods. These enhancements aim to provide more flexible, efficient, and feature-rich scheduling, addressing current limitations and enabling advanced scheduling scenarios.
    • issues/134428, issues/134471, issues/134472, issues/134473, issues/134474, issues/134475, issues/134476, issues/134477, issues/134478
  • Resource Management and Metrics Issues: There are several problems related to resource reporting and management, including missing pod-level resource health status, lack of pod-level resource management similar to container-level limits, duplicate pod metrics causing inflated CPU usage, and missing memory usage metrics on Windows nodes. These issues affect accurate resource tracking and scheduling decisions, impacting cluster stability and observability.
    • issues/134482, issues/134513, issues/134518, issues/134522
  • Kubelet and Pod Readiness Conditions: The kubelet incorrectly sets the PodReadyToStartContainers condition only after the container image is pulled rather than immediately after sandbox creation and network setup, causing delays in pod readiness reporting. This discrepancy leads to slower readiness status updates, especially for large container images, affecting pod lifecycle management and monitoring.
    • issues/134460
  • Event and Metadata Reporting Bugs: Kubernetes event reporting has issues where events related to nodes are excluded if the involvedObject lacks an apiVersion, and node reboot events have incorrect metadata with missing apiVersion and improper UID fields. These bugs cause incomplete or inaccurate event visibility and tracking, complicating cluster diagnostics and auditing.
    • issues/134490, issues/134503
  • Etcd and Apiserver Resource Usage and Data Races: There are data races detected in etcd3 storage watcher code and significant memory usage increases in etcd during upgrades due to tight loops updating APIService keys. These issues indicate concurrency and resource management problems in core components, potentially impacting cluster stability and performance.
    • issues/134427, issues/134437
  • Feature Gate and Code Cleanup Requests: The DynamicResourceAllocation feature gate is requested to be removed in Kubernetes 1.38 since it has been locked on by default since 1.35, aiming to simplify the codebase and reduce maintenance overhead. This cleanup reflects ongoing efforts to streamline Kubernetes features and configurations.
    • issues/134459
  • Lease and ResourceClaim Enhancements: Issues highlight missing support for configurable lease durations in LeaseCandidate types and requests for aggregate resource requests in ResourceClaim models, such as total GPU memory across devices. These enhancements are needed to improve resource allocation flexibility and user control in Kubernetes.
    • issues/134486, issues/134491
  • Kube-controller-manager Job Status Errors: An unhandled error occurs in the kube-controller-manager logs when resuming suspended jobs due to invalid job status updates that improperly modify the required startTime field, causing repeated synchronization errors. This bug affects job lifecycle management and controller stability.
    • issues/134521
  • Pod Affinity Scheduling Logic Bug: The current pod affinity implementation incorrectly requires all requiredDuringSchedulingIgnoredDuringExecution rules to match the same pod, preventing scheduling on nodes that satisfy each rule separately, which contradicts documented API behavior. This bug limits scheduling flexibility and can cause unexpected pod placement failures.
    • issues/134534
  • ClusterIP Allocator Metric Bug: The clusterip_allocator reports negative available_ips metrics when multiple service CIDRs are used due to incorrect aggregation of IP allocations, leading to misleading metrics and potential resource tracking errors.
    • issues/134509
  • DRA Kubelet Plugin Missing ShareID Support: The DRA kubelet plugin lacks support for ShareID information, which is necessary to uniquely identify shared devices and prevent container creation failures caused by duplicate device IDs when using the DRAConsumableCapacity feature.
    • issues/134519
  • EC2 Device Plugin E2E Test Failures: The Kubernetes end-to-end test suite for the EC2 device plugin with GPU support has been failing for weeks due to nodes not becoming healthy, possibly caused by cloud-init failures, with no current lead assigned to resolve the issue.
    • issues/134483
  • IngressClass Shortname Addition Request: There is a request to add the shortname "ic" for the IngressClass resource to simplify and standardize command-line usage, aligning it with other Kubernetes resources like StorageClass.
    • issues/134527
  • Build Job Failure Due to Unbound Variable: A failing test in the ci-kubernetes-build.Overall job is caused by an unbound variable error during the 'make clean' command, linked to a recent pull request and causing build failures since late September 2025.
    • issues/134538
  • Validation Function Simplification Investigation: An investigation is underway to determine if only ValidateObjectMetaUpdate is needed during update validation instead of both ValidateObjectMeta and ValidateObjectMetaUpdate, aiming to simplify code and improve test coverage.
    • issues/134444
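
Several of the bugs above come down to per-CIDR accounting, notably the clusterip_allocator reporting negative available_ips when multiple service CIDRs are in use. A minimal sketch of counting allocations only within an allocator's own range, using Python's ipaddress module (hypothetical code, not the allocator's implementation):

```python
import ipaddress

# Count only the allocated IPs that fall inside this allocator's CIDR.
# When multiple service CIDRs exist, attributing every allocation to every
# allocator (instead of filtering like this) can drive available_ips negative.

def available_ips(cidr, allocated):
    net = ipaddress.ip_network(cidr)
    in_range = sum(1 for ip in allocated if ipaddress.ip_address(ip) in net)
    usable = net.num_addresses - 2  # exclude network and broadcast addresses
    return usable - in_range

allocated = ["10.0.0.10", "10.0.0.11", "10.1.0.5"]  # spans two service CIDRs
print(available_ips("10.0.0.0/24", allocated))  # 252: only two IPs are in range
```

Without the membership filter, the third IP would be subtracted from this allocator's pool too, and with enough cross-CIDR allocations the metric goes negative.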

2.4 Closed Issues

This section lists, groups, and then summarizes issues that were closed within the last week in the repository. This section also links the associated pull requests if applicable.

Issues Closed This Week: 6

Summarized Issues:

  • Authorization and Scalability Failures: The Kubernetes scalability tests are failing primarily due to authorization failures when pulling container images, caused by expired or deleted service account keys. This issue affects containerd’s ability to authenticate with the artifact registry, leading to failures in the ci-kubernetes-e2e-gce-scale-performance job.
    • issues/134429
  • Pod Scheduling and Toleration Propagation: Pods using ResourceClaims with a NoExecute taint toleration are scheduled correctly, but the toleration is not propagated to the allocation result. This causes the eviction controller to prematurely evict the Pod before it can run, impacting Pod stability.
    • issues/134434
  • StatefulSet Port Update Failure: Using kubectl apply to update the container ports section of a StatefulSet manifest fails to update ports correctly, retaining only the first port despite changes. This results in incorrect port configurations after manifest updates.
    • issues/134470
  • TLS Version Configuration Clarification: A proposal to enable TLS 1.3 by default was closed after clarifying that Kubernetes only sets the minimum TLS version to 1.2 by default. Kubernetes still allows negotiation up to TLS 1.3 but maintains conservative defaults to avoid breaking compatibility.
    • issues/134485
  • GPU-Related Test Failures: Multiple GPU-related end-to-end tests in the Kubernetes node SIG are failing, specifically in the gce-device-plugin-gpu-master job, with no identified cause since October 8, 2025. These failures affect the reliability of GPU support testing in Kubernetes.
    • issues/134494
  • Flaky GPU Matrix Multiplication Test: A GPU-based matrix multiplication test intermittently times out, causing flakes in the sig-release-master-blocking job for the gce-device-plugin-gpu-master. The flakiness was first observed on September 25, 2025, and continues through October 10, 2025.
    • issues/134517

2.5 Issue Discussion Insights

This section analyzes the tone and sentiment of discussions within this project's open and closed issues from the past week. It aims to identify potentially heated exchanges and to maintain a constructive project environment.

Based on our analysis, there are no instances of toxic discussions in the project's open or closed issues from the past week.


III. Pull Requests

3.1 Open Pull Requests

This section provides a summary of pull requests that were opened in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.

Pull Requests Opened This Week: 62

Key Open Pull Requests

1. [WIP] Validation gen: This pull request introduces and iteratively improves the validation-gen tool for Kubernetes, enhancing the generation and ratcheting of declarative validation code, updating internal APIs and feature gates, refining validators for various tags and types, adding tests and documentation, and fixing issues related to code generation and validation logic.

  • URL: pull/134484
  • Merged: No
  • Associated Commits: 1290f, 74301, 4e0b6, 9a873, 711cf, 9fb1e, 56205, 4409f, 67ba4, 63ef0, e374f, b14e8, a36c6, b4d60, 67f57, b8827, 3b581, 10125, b2b55, a9f6d, eeddf, abf54, db99e, a722a, 48aaf, f8da3, 7ff6b, 6af9e, 54366, 31589, 9d803, db7b7, eacf0, 8d97c, 77b04, f649c, 672d9, 1c8f9, c9550, f52ff, f0c09, d23da, d3847, 480fa, 242b4, ea9f9, c5bf2, 873f6, a3342, 47b2d, 7e574, c35b9, f46d0, 76221, d57d8, 56046, 82ae1, ee8e5, 4546b, 2b22d, cb1e2, 7bdb8, e317a, fde2e, 78ae6, e4398, 4d194, 63e91, f3ab5, 0e2ca, 84f23, 16c25, 9a815, 18ae6, 9df5a, 65169, 156ba, 1b4ba, eb646, d4356, 2b312, 3b530, 95816, 93fea, 5463e, 9d76d, 20f12, 02320, 5b2ba, 74cbd, 7b1c8, 3bc83, 9607a, 50a2d, 8e06c, 36b1b, ee98b, c95b5, d5f30, f2d33, a7c59, 38073, 0693f, 2f69d, e28ef, 2e3fe, 740fb, 45ed3, 77a17, cc607, 84748, f64f7, 0c4bd, fc92e, b2e81, c51a7, 4a322, 05528, eaa4c, 67616, 911ea, 042eb, 293ad

2. WIP: KEP-5589: gogo fork: This pull request forks the gogo protobuf generation at version 1.3.2 into an internal code-generator package, rewrites imports and regenerates test data to ensure tests pass, updates all gogo imports to use the internal package, and aims to remove unused gogo plugins and functionality to eliminate code-generator dependencies on gogo protobuf generation.

  • URL: pull/134530
  • Merged: No
  • Associated Commits: 504d0, 0c29a, 37567, 62e81, 56be7, 07ca9, 4c6da, cbf26, 3a889, ebc19, fb805, 4cc7c, ce438

3. Enable Declarative Validation for ClusterRole: This pull request enables the declarative validation framework for the ClusterRole API group in Kubernetes by adding +k8s:validation-gen tags to the RBAC v1, v1alpha1, and v1beta1 packages and updating the ClusterRole strategy to invoke generated declarative validation in the Validate and ValidateUpdate methods, while maintaining existing validation behavior unless specific feature gates are enabled.

  • URL: pull/134537
  • Merged: No
  • Associated Commits: 3f8a5, 957e0, 88920, a1cb1, a77f5, e2ba9, c8ce2, 997f5, ced03, 7246b

Other Open Pull Requests

  • API Validation Migration for Device Fields: Multiple pull requests migrate uniqueness and validation logic for device-related fields in the resource.k8s.io API group to declarative validation using tags like +k8s:enum and validation tags. These changes improve maintainability and explicitness of API validation for fields such as DeviceClaim.Requests, DeviceRequest.FirstAvailable, DeviceConstraint.Requests, DeviceClaimConfiguration.Requests, and ResourceClaim DeviceAllocationMode.
    • pull/134496, pull/134443, pull/134446, pull/134489
  • Kubernetes Build Process and Environment Updates: Several pull requests focus on improving the Kubernetes build process by removing rsync and data containers, running builds directly with kube-cross, and upgrading the build environment to Go 1.25.2. These updates also include dependency cleanup and bumping container images to support faster patching and cleaner workflows.
    • pull/134510, pull/134487, pull/134501
  • Node and Scheduler Improvements: Pull requests address node conformance test restoration by reintroducing a fake registry, fix kubelet node rejection bugs by implementing synchronous node fetch and cache updates, and improve scheduler performance by caching NodeInfo in the Kubelet admission handler. Additionally, a new EarlyNominate step is added to the preemption scheduling mechanism to reduce wait times after preemption.
    • pull/134453, pull/134445, pull/134462, pull/134543
  • Network Policy and Ingress Testing Enhancements: A pull request introduces an end-to-end test verifying that a default-deny ingress NetworkPolicy blocks north–south traffic with ExternalTrafficPolicy=Local, reorganizes network policy tests to align with existing patterns, and removes special SNAT handling to ensure accurate failure detection.
    • pull/134448
  • PreferSameTrafficDistribution Feature GA and Deprecation: This update promotes the PreferSameTrafficDistribution feature to GA, deprecates the original PreferClose value in favor of PreferSameZone, updates documentation and integration tests, and adds warnings for deprecated values.
    • pull/134457
  • Storage Version Migration API Updates: The Storage Version Migration API is updated to beta by copying it to v1beta1 and removing unused fields to streamline the API, accompanied by a fix for flaky storage version tests by replacing fixed sleeps with polling.
    • pull/134480, pull/134432
  • Queueinghint Feature Pre-submit Testing on ppc64le: Two work-in-progress pull requests run pre-submit tests for the queueinghint feature on the ppc64le architecture, including logging test times and setting GOMAXPROCS to 4.
    • pull/134435, pull/134436
  • IP Allocator Bug Fixes: A pull request fixes critical IP allocator issues by adding CIDR filtering to count only IPs within the allocator's range and implementing overflow protection in the Free() method, improving accuracy and stability in large clusters.
    • pull/134516
  • Kubelet Plugin API Enhancement: Support for ShareID is added to the kubelet plugin API to enhance device resource allocation capabilities.
    • pull/134520
  • DRA Device Health and Metrics Improvements: Enhancements include adding an optional message field to DeviceHealth protobuf events for better health status tracking and introducing new metrics for DRAExtendedResource, such as a source label and a total resource claim creation metric.
    • pull/134506, pull/134523
  • etcd Dependency and Manifest Updates: Updates include bumping the etcd dependency to version 3.6.5 for testing in release-1.34 and cherry-picking a fix for the etcd manifest in the GCE environment.
    • pull/134426, pull/134431
  • kubectl Delete Command Concurrency Feature: A new --concurrency flag is introduced to kubectl delete, allowing users to specify concurrent workers for delete requests, improving deletion efficiency by always using a queue and worker pool.
    • pull/134438

3.2 Closed Pull Requests

This section provides a summary of pull requests that were closed in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.

Pull Requests Closed This Week: 18

Key Closed Pull Requests

1. Eugene.sergeichyk/etcd client outlier detect: This pull request introduces an outlier detection feature in the etcd gRPC client by adding a loadBalancingConfig with outlier_detection support and integrates the google.golang.org/grpc/xds package to enhance client-side load balancing capabilities.

  • URL: pull/134512
  • Merged: No
  • Associated Commits: f9e93, 07b94, f86a7

2. Revert "Merge pull request #134178 from HirazawaUi/remove-RootlessControlPlane": This pull request reverts a previous merge that removed the RootlessControlPlane feature by undoing the specific commit that introduced those changes, effectively restoring the codebase to its prior state.

  • URL: pull/134524
  • Merged: Yes
  • Associated Commits: d6dec, 7b4d4

3. Fix incorrect error messages: This pull request fixes incorrect error messages that were introduced as an oversight in a previous change, improving the accuracy and clarity of error reporting in the Kubernetes project.

  • URL: pull/134425
  • Merged: Yes
  • Associated Commits: f9a89

Other Closed Pull Requests

  • Volume and Persistent Volume Improvements: Multiple pull requests address issues and enhancements related to volumes. One fixes a polling issue in integration tests for persistent volumes to prevent test failures under load, while another introduces a new VolumeModifying condition and updates end-to-end tests to support the newer resizer functionality, with an additional cherry pick of this change to a release branch.
    • pull/134440, pull/134456, pull/134507
  • Build and Cleanup Process Enhancements: Several pull requests improve the build and cleanup processes. These include speeding up make clean by limiting recursive permission changes, removing the dependency on GNU tar for cleanup to support macOS environments, and fixing a panic in the cron parsing logic used in Kubernetes.
    • pull/134511, pull/134526, pull/134532
  • API and Controller Refactoring: A pull request refactors the APIApprovalController in the apiextensions-apiserver by adding a RunWithContext method to support cancellation and contextual logging, while maintaining backward compatibility with the original Run method. This change is part of a broader effort to enable contextual logging across Kubernetes components.
    • pull/134449
  • Tolerations and Scheduler Fixes: Pull requests improve toleration handling and scheduler behavior. One adds validation to the key field of the DeviceToleration struct to ensure keys conform to Kubernetes label formats, while another fixes a scheduler bug where NoExecute taint tolerations were not copied into pod status, preventing immediate eviction and adding logging and example YAMLs for issue reproduction.
    • pull/134465, pull/134479
  • Configuration and Environment Fixes: Some pull requests address configuration and environment issues. One reverts an etcd update due to a broken Windows test, another fixes the containerd systemd unit PATH environment variable to restore proper program access, and a third improves kubeup environment parameters by prioritizing unified OS image families and projects to fix scale job failures.
    • pull/134447, pull/134461, pull/134488
  • GPU Installer Update: A pull request updates the cos-gpu-installer to support the latest COS 121 release, resolving GPU driver installation failures caused by the upgrade to the newest COS long-term support milestone.
    • pull/134495

3.3 Pull Request Discussion Insights

This section analyzes the tone and sentiment of discussions within this project's open and closed pull requests from the past week. It aims to identify potentially heated exchanges and to maintain a constructive project environment.

Based on our analysis, there are no instances of toxic discussions in the project's open or closed pull requests from the past week.


IV. Contributors

4.1 Contributors

Active Contributors:

We consider an active contributor in this project to be any contributor who has made at least 1 commit, opened at least 1 issue, created at least 1 pull request, or made more than 2 comments in the last month.

If there are more than 10 active contributors, the list is truncated to the top 10 based on contribution metrics for better clarity.

Contributor | Commits | Pull Requests | Issues | Comments
pohly | 20 | 8 | 11 | 58
yongruilin | 75 | 4 | 1 | 7
aaron-prindle | 47 | 10 | 2 | 28
BenTheElder | 19 | 4 | 4 | 49
macsko | 16 | 8 | 10 | 22
thockin | 52 | 1 | 0 | 2
lalitc375 | 31 | 8 | 0 | 15
liggitt | 23 | 2 | 0 | 28
aojea | 3 | 3 | 0 | 33
p0lyn0mial | 29 | 4 | 0 | 0
