Weekly Project News


Weekly GitHub Report for Kubernetes: September 01, 2025 - September 08, 2025

Weekly GitHub Report for Kubernetes

Thank you for subscribing to our weekly newsletter! Each week, we deliver a comprehensive summary of your GitHub project's latest activity right to your inbox, including an overview of your project's issues, pull requests, contributors, and commit activity.


Table of Contents

  • I. News
    • 1.1. Recent Version Releases
    • 1.2. Version Information
  • II. Issues
    • 2.1. Top 5 Active Issues
    • 2.2. Top 5 Stale Issues
    • 2.3. Open Issues
    • 2.4. Closed Issues
    • 2.5. Issue Discussion Insights
  • III. Pull Requests
    • 3.1. Open Pull Requests
    • 3.2. Closed Pull Requests
    • 3.3. Pull Request Discussion Insights
  • IV. Contributors
    • 4.1. Contributors

I. News

1.1 Recent Version Releases:

The current version of this repository is v1.32.3.

1.2 Version Information:

The Kubernetes v1.32.3 patch release, announced on March 11, 2025, introduces several fixes and improvements detailed in the official CHANGELOG, with binary downloads available alongside the release notes. The release reflects the project's ongoing focus on functionality and stability.

II. Issues

2.1 Top 5 Active Issues:

We consider active issues to be issues that have been commented on most frequently within the last week. Bot comments are omitted.

  1. client can run into infinite loop when watch ends with too old resource version: This issue describes a scenario where a Kubernetes client using a pod reflector/informer can enter an infinite loop when a watch ends with a "too old resource version" error, because the client keeps relisting with a stale resource version that is never reset. The problem arises because paginated list responses return the same resource version that was requested, preventing the client from recovering and causing repeated watch failures when switching between API servers with compacted resource versions. (A minimal watch-loop sketch illustrating the recovery pattern follows this list.)

    • The comments discuss the root cause being the paginated list response echoing the requested resource version, leading to the infinite loop. Two potential solutions are proposed: resetting the lastSyncResourceVersion on the client side or blacklisting stale resource versions on the apiserver side. Participants clarify how pagination is applied in client-go, confirm the behavior of relist requests, and analyze relevant code paths, ultimately agreeing that the infinite loop stems from the interaction between paginated lists and stale resource versions.
    • Number of comments this week: 13
  2. [Flaking Test] integration TestConcurrencyIsolation: This issue reports on the flakiness of the integration test "TestConcurrencyIsolation" in the Kubernetes project, which has shown increased failure rates since August 29, particularly around September 2-3. The failures appear related to concurrency and priority class configurations, with discussions exploring potential causes including recent infrastructure changes, test environment performance variations, and recent pull requests affecting CPU allocation and priority classes.

    • The comments discuss possible causes for the flakiness, including a reduction in CPU cores in the test environment and recent code changes, but no definitive cause is identified. Contributors note that the test runs on faster newer VMs despite fewer cores, and the flake rate has since decreased, suggesting an infrastructure issue; ongoing efforts aim to monitor test performance on less powerful hardware to better understand and mitigate the flakiness.
    • Number of comments this week: 9
  3. Add support for tracing in scheduler: This issue requests adding support for OpenTelemetry (OTEL) tracing in the Kubernetes scheduler to generate traces for individual pod scheduling attempts. This enhancement aims to improve the investigation of slow scheduling by providing detailed trace context that can be propagated to custom scheduler plugins, offering better visualization and integration compared to the existing log-based utiltrace.

    • The comments include initial SIG scheduling and instrumentation labels, discussions about the need for expertise in OTEL tracing, concerns about potential performance impacts, and suggestions on sampling rates to mitigate overhead. Contributors express willingness to assist, share preliminary instrumentation attempts, and indicate plans for further development through a KEP or direct code contributions.
    • Number of comments this week: 7
  4. The PodAndContainerStatsFromCRI feature doesn’t expose metrics.: This issue concerns the PodAndContainerStatsFromCRI feature in Kubernetes, which currently does not expose container metrics through the /metrics/cadvisor endpoint as expected. The user is seeking clarification on how to properly observe CRI-provided metrics, whether all cAdvisor metrics have been implemented by CRI, and the future of cAdvisor in light of this feature.

    • The comments clarify that containerd does not yet support these metrics, which is why the feature is still in alpha, and suggest trying CRI-O as an alternative for testing; the discussion includes references to related issues and expresses appreciation for the guidance provided.
    • Number of comments this week: 6
  5. Version 1.34 changelog added an incorrect PR link: This issue reports that the changelog for version 1.34 contains an incorrect pull request link, where the description in the changelog does not accurately reflect the changes made in the referenced PR #133018. The reporter requests an update to the changelog to link to the correct PR or to correct the description to match the actual changes introduced.

    • The comments clarify confusion about whether the issue is with the PR's release note or the changelog link itself, confirming that the changelog description does not correspond to the actual PR content. Participants agree that the changelog sentence is misleading and suggest a more accurate description that reflects the PR’s true purpose, which is adding port names to the kubectl describe pod output.
    • Number of comments this week: 5
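
As a companion to issue 1 above, here is a minimal, illustrative Go sketch of a hand-rolled list/watch loop showing the recovery pattern discussed in the comments: on a "too old resource version" (HTTP 410 Gone) error, the resource version is reset to empty so the next list is served fresh, instead of relisting at the stale version forever. This is a sketch of the pattern only, not the reflector code or the eventual fix.

```go
package demo

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchPods lists then watches pods, recovering from compacted (expired)
// resource versions by resetting rv to "" rather than reusing the stale one.
func watchPods(ctx context.Context, cs kubernetes.Interface, ns string) error {
	rv := "" // empty => fresh list from the server
	for {
		list, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{ResourceVersion: rv})
		if err != nil {
			return err
		}
		rv = list.ResourceVersion

		w, err := cs.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{ResourceVersion: rv})
		if err != nil {
			if apierrors.IsResourceExpired(err) {
				rv = "" // the key step: do not relist at the stale version
				continue
			}
			return err
		}
		for ev := range w.ResultChan() {
			fmt.Println("event:", ev.Type)
		}
		w.Stop()
	}
}
```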

2.2 Top 5 Stale Issues:

We consider stale issues to be issues that have had no activity within the last 30 days. The team should work together to get these issues resolved and closed as soon as possible.

  1. Zone-aware down scaling behavior: This issue describes a problem with the horizontal pod autoscaler's (HPA) scale-in behavior in a multi-zone Kubernetes deployment, where the expected even distribution of pods across zones (enforced by topology spread constraints with maxSkew: 1) is not maintained. Specifically, during scale-in events, pods become unevenly distributed, resulting in one zone having significantly fewer pods and causing high CPU usage on the remaining pod in that zone, which deviates from the intended maximum skew of one pod difference per zone.
  2. apimachinery's unstructured converter panics if the destination struct contains private fields: This issue describes a panic occurring in apimachinery's DefaultUnstructuredConverter when it attempts to convert an unstructured object into a destination struct that contains private (non-exported) fields. The reporter notes that the converter should ideally skip these private fields instead of panicking, as the problem arises notably with protobuf-generated gRPC structs that include private fields for internal state, causing the conversion to fail unexpectedly. (A small repro sketch follows this list.)
  3. Integration tests for kubelet image credential provider: This issue proposes adding integration tests for the kubelet image credential provider, similar to the existing tests for client-go credential plugins. It suggests that since there are already integration tests for pod certificate functionality, implementing tests for the kubelet credential plugins would be a logical and beneficial extension.
  4. conversion-gen generates code that leads to panics when fields are accessed after conversion: This issue describes a bug in the conversion-gen tool where it generates incorrect conversion code for structs that have changed field types between API versions, specifically causing unsafe pointer conversions instead of properly calling the appropriate conversion functions. As a result, accessing certain fields like ExclusiveMaximum after conversion leads to runtime panics, highlighting the need for conversion-gen to produce safe and correct code to prevent such crashes.
  5. Failure cluster [ff7a6495...] TestProgressNotify fails when etcd in k/k upgraded to 3.6.2: This issue describes a failure in the TestProgressNotify test that occurs when the etcd component in the Kubernetes project is upgraded to version 3.6.2. The test times out after 30 seconds waiting on a result channel, with multiple errors indicating that the embedded etcd server fails to set up serving due to closed network connections and server shutdowns.
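
The converter panic in item 2 is easy to reproduce in miniature. The struct below is an assumed stand-in for the protobuf-generated gRPC types mentioned in the report (field names are illustrative); per the issue, FromUnstructured can panic on the unexported field on affected versions instead of skipping it.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
)

// Target mimics a generated struct: exported data plus unexported state.
type Target struct {
	Name  string `json:"name"`
	state int    // unexported, like protobuf's internal bookkeeping fields
}

func main() {
	u := map[string]interface{}{"name": "example"}
	var t Target
	// Reported behavior: this may panic on `state` rather than ignore it.
	err := runtime.DefaultUnstructuredConverter.FromUnstructured(u, &t)
	fmt.Println(t.Name, err)
}
```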

2.3 Open Issues

This section lists, groups, and then summarizes issues that were created within the last week in the repository.

Issues Opened This Week: 21

Summarized Issues:

  • Client-go Watch and Relist Loop Issues: The Kubernetes client-go can enter an infinite loop when a watch ends with a "too old resource version" error because it repeatedly attempts to relist using an outdated resource version that is not reset. This problem is especially pronounced with paginated list requests and apiserver restarts, causing persistent relisting failures.
  • issues/133810
  • Changelog and Documentation Accuracy: The version 1.34 changelog contains an incorrect pull request link and description that do not accurately reflect the changes made in PR #133018. An update is requested to correct the PR link and corresponding changelog entry to ensure documentation accuracy.
  • issues/133812
  • Scheduler Observability and Event Ordering: There are requests to add OpenTelemetry tracing support to the Kubernetes scheduler to generate detailed traces for pod scheduling attempts, aiding investigation of slow scheduling. Additionally, the AssumeCache in the scheduler may receive PersistentVolume update events later than main event handlers, causing scheduling decisions based on stale data and resulting in pods being stuck indefinitely.
  • issues/133819
  • Admission and Validation Policy Failures: The ValidatingAdmissionPolicyBinding fails to resolve ConfigMap parameters correctly when related resources are deleted and recreated together, causing policy validation errors. Also, migrating compound handwritten validation functions to a declarative framework is blocked due to mismatches in one-to-one comparison logic, complicating incremental migration.
  • issues/133827
  • Deployment Spec Validation and Blue-Green Deployments: There is a request to modify validation logic so that when progressDeadlineSeconds is set to math.MaxInt32, the check ensuring minReadySeconds is less than progressDeadlineSeconds is skipped. This change supports Blue-Green deployment scenarios where components like Kruise set very high progressDeadlineSeconds values to allow the blue version to exist without traffic for extended periods. (A hypothetical sketch of the requested check follows this list.)
  • issues/133830
  • OOM Score Adjustment for Burstable Pods: The current fixed oom_score_adj value set at container creation causes less memory-intensive containers to be killed first during global out-of-memory events. A dynamic update mechanism is requested to prioritize termination of containers with memory usage closest to their limits, improving pod stability under memory pressure. (A usage-aware scoring sketch also follows this list.)
  • issues/133839
  • Memory Usage and Cache Management in kube-apiserver: The kube-apiserver consumes excessive memory when creating many objects, especially Custom Resource Definitions with multiple versions, due to caching and deserialization mechanisms that do not promptly release memory after deletions. This leads to unexpectedly high and persistent memory usage impacting resource efficiency.
  • issues/133846
  • Test Flakiness and Infrastructure Stability: The integration test "TestConcurrencyIsolation" has become flaky with increased failure rates since August 29, potentially linked to recent changes in priority classes and test infrastructure. Discussions focus on whether failures stem from test environment performance variations or recent code changes, affecting test reliability.
  • issues/133861
  • DNS Resolution Failures in CoreDNS: CoreDNS fails to resolve external domains for application pods because it is not properly pointed at the host's resolver file (/etc/resolv.conf). This misconfiguration causes resolution failures despite Corefile checks, impacting pod network functionality.
  • issues/133869
  • Cache and Storage Duplication and Resource Waste: Duplicate initializations of cache and etcd storage instances for resources like serviceaccounts and events lead to wasted resources and increased memory usage. Proposals include adding integration or end-to-end tests to ensure only a single cacher and storage instance runs per resource.
  • issues/133877
  • Metrics Exposure and Feature Flag Impact: Enabling the PodAndContainerStatsFromCRI feature causes the /metrics/cadvisor endpoint to no longer expose many container-related metrics. This leads to confusion about how to observe these metrics properly and questions about the completeness and future of cAdvisor support in Kubernetes.
  • issues/133884
  • Admission Plugin Reusability: There is a request to move the OwnerReferencesPermissionEnforcement admission plugin to the k8s.io/apiserver module to enable reuse by external extension API servers. This would allow consistent enforcement of permission checks on metadata.ownerReferences and blockOwnerDeletion fields across different API servers.
  • issues/133891
  • CSI Volume Permission Modification Delays: The kubelet repeatedly and recursively modifies permissions of mounted CSI volume directories and contents due to mismatched expected and actual permissions. This causes significant delays when mounting volumes with many files on slow disks, impacting pod startup times.
  • issues/133892
  • Helm Hook Job Completion Issues: A Helm pre-install/pre-upgrade hook Job uses the SuccessCriteriaMet condition instead of the Complete condition in its SuccessPolicy, causing the Job to never reach a complete state. This prevents Deployment creation because terminating pods are not cleaned up properly.
  • issues/133903
  • etcd and Go Version Compatibility: Kubernetes version 1.33 lacks support for the PQC hybrid key exchange (X25519MLKEM768) in etcd because it uses Go 1.23, which does not support this key exchange. There is uncertainty whether the updated etcd version with Go 1.24.6 will be included in Kubernetes 1.33 or deferred to 1.34.
  • issues/133909
  • kubectl Port-Forward Compatibility: The kubectl port-forward command initially fails with a 400 Bad Request error due to a missing required "port" query parameter during WebSocket upgrade attempts on older Kubernetes server versions. It then successfully falls back to using the SPDY protocol for port forwarding, ensuring compatibility.
  • issues/133913
  • crictl Test Failures Due to Hardcoded Paths: A crictl test on kOps fails because the hardcoded path /home/kubernetes/bin/crictl is not found, causing the "[sig-node] crictl should be able to run crictl on the node" test to fail since August 26, 2025. It is suggested that crictl might be better treated as a feature-specific test since it is not required for node operation.
  • issues/133915
  • Client-go ConfigFlags Validation Errors: Setting CertFile and KeyFile fields in ConfigFlags to override kubeconfig data containing inline client certificate and key data causes validation errors during merge. This happens because both file-based and inline certificate data are specified simultaneously, and ConfigFlags does not support setting inline data fields to nil to properly override them.
  • issues/133916
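
Two of the items above lend themselves to small sketches. First, the progressDeadlineSeconds request (issues/133830) amounts to one extra guard in validation; the following is a hypothetical sketch of the requested behavior with assumed names, not the actual apps/v1 validation code.

```go
package main

import (
	"fmt"
	"math"
)

// validateReadinessVsDeadline sketches the requested rule: treat
// progressDeadlineSeconds == math.MaxInt32 as "disabled" and skip the
// comparison against minReadySeconds in that case.
func validateReadinessVsDeadline(minReadySeconds, progressDeadlineSeconds int32) error {
	if progressDeadlineSeconds == math.MaxInt32 {
		return nil // deadline effectively disabled, e.g. for Blue-Green setups
	}
	if minReadySeconds >= progressDeadlineSeconds {
		return fmt.Errorf("minReadySeconds (%d) must be less than progressDeadlineSeconds (%d)",
			minReadySeconds, progressDeadlineSeconds)
	}
	return nil
}

func main() {
	fmt.Println(validateReadinessVsDeadline(30, 600))           // <nil>
	fmt.Println(validateReadinessVsDeadline(30, math.MaxInt32)) // <nil> with the proposed skip
}
```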
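Second, the oom_score_adj item (issues/133839) contrasts kubelet's static heuristic, fixed at container creation, with a score tied to current usage. The static formula below is simplified from kubelet's burstable policy (clamping details omitted); the dynamic variant is purely hypothetical and only illustrates the requested direction.

```go
package main

import "fmt"

// staticOOMScoreAdj: simplified burstable heuristic; derived once from the
// memory request, so low-request containers get high (kill-first) scores.
func staticOOMScoreAdj(memoryRequest, memoryCapacity int64) int64 {
	adj := 1000 - (1000*memoryRequest)/memoryCapacity
	if adj < 3 {
		adj = 3
	}
	if adj > 999 {
		adj = 999
	}
	return adj
}

// dynamicOOMScoreAdj: hypothetical usage-aware score (not kubelet behavior);
// approaches 999 as usage nears the limit, so near-limit containers die first.
func dynamicOOMScoreAdj(usage, limit int64) int64 {
	return 2 + (997*usage)/limit
}

func main() {
	fmt.Println(staticOOMScoreAdj(256<<20, 8<<30))  // small request => high score
	fmt.Println(dynamicOOMScoreAdj(900<<20, 1<<30)) // near limit => higher score
}
```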

2.4 Closed Issues

This section lists, groups, and then summarizes issues that were closed within the last week in the repository. This section also links the associated pull requests if applicable.

Issues Closed This Week: 18

Summarized Issues:

  • Performance regressions in Kubernetes events and API server: The Kubernetes 1.34 release experienced increased latency for POST requests to the events resource at large scale, likely due to etcd or infrastructure changes, impacting expected performance. Additionally, the API server logs show recurring "Error getting keys" messages linked to a new etcd3 method, indicating underlying stability issues.
  • [issues/133600, issues/133787]
  • Code quality and testing improvements: There is a need to run gofmt -s on generated Go code to simplify and standardize output, reducing complexity in code generators. Furthermore, adding unit test coverage to core KubeVela packages aims to improve code quality and prevent regressions by verifying the correctness of changes. (An example of the simplifications gofmt -s applies follows this list.)
  • [issues/133609, issues/133835]
  • Networking and connection management issues: Windows kube-proxy lacks a configurable TCP idle timeout similar to Linux, causing idle connections to drop after about four minutes. Also, the deprecated SPDY POST endpoints lack conformance test coverage due to kubectl's protocol transition, raising concerns about maintaining these unreliable endpoints.
  • [issues/133639, issues/133689]
  • Test failures and CI job disruptions: Pod Resources API tests fail on crio serial lanes due to a panic from assigning to a nil map, causing all podresource API tests to fail. Additionally, the Kubernetes future dependencies CI job broke after updating prometheus/common, requiring dependency updates to fix test panics.
  • [issues/133658, issues/133878]
  • Access control and role permission gaps: The new events.events.k8s.io API is not included in the default Kubernetes view cluster role, unlike the core events API, resulting in insufficient permissions for users with the default view role.
  • [issues/133701]
  • Platform and tooling compatibility issues: Running hack/local-up-cluster.sh with START_MODE=kubeletonly on MacOS fails because kubelet is unsupported on that OS. Also, the spf13/pflag 1.0.8 release renamed a public type causing compilation errors in Kubernetes master, necessitating dependency updates.
  • [issues/133795, issues/133809]
  • User experience and configuration guidance: Users seek ways to configure Ingress resources to redirect unknown root paths to 404 pages instead of default backends, highlighting a need for clearer configuration options. The announcement of the Kubernetes Platform Engineer MCP Server introduces an AI-powered assistant to aid in troubleshooting and automation for platform engineering.
  • [issues/133820, issues/133821]
  • Metrics and CLI regressions: Kubernetes 1.34 removed previously available alpha kubelet volume stats metrics from the node metrics API, causing a regression and lack of documentation. The bash tab completion for kubectl is broken in v1.34.0, causing unexpected behavior and requiring a fix.
  • [issues/133847, issues/133864]
  • API validation and security vulnerabilities: Kubernetes CRDs incorrectly emit validation warnings for valid numeric formats, revealing gaps in format recognition and enforcement. A security vulnerability in the secrets-store-sync-controller exposed service account tokens in logs, potentially allowing unauthorized access to secrets, which has been fixed in later versions.
  • [issues/133880, issues/133897]
  • Security hardening in configuration examples: Kubernetes configuration examples lack essential security hardening measures such as security contexts, network policies, resource quotas, and improved service account permissions, which are necessary to mitigate risks from insecure defaults and promote secure container deployments.
  • [issues/133914]
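
For reference on the gofmt -s item above, these are the kinds of simplifications that -s applies and that generated code frequently misses:

```go
package main

import "fmt"

type Point struct{ X, Y int }

func main() {
	// Before `gofmt -s`: m := map[string]Point{"a": Point{1, 2}}
	// After:  the redundant element type inside the literal is dropped.
	m := map[string]Point{"a": {1, 2}}

	// Before `gofmt -s`: for k, _ := range m { ... }
	// After:  the unused blank value is dropped.
	for k := range m {
		fmt.Println(k, m[k])
	}
}
```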

2.5 Issue Discussion Insights

This section will analyze the tone and sentiment of discussions within this project's open and closed issues that occurred within the past week. It aims to identify potentially heated exchanges and to maintain a constructive project environment.

Based on our analysis, there are no instances of toxic discussions in the project's open or closed issues from the past week.


III. Pull Requests

3.1 Open Pull Requests

This section provides a summary of pull requests that were opened in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.

Pull Requests Opened This Week: 44

Key Open Pull Requests

1. [WIP] fake client watch supports watchlist: This pull request enhances the fake client in the Kubernetes client-go testing framework to support watchlist functionality by passing GroupVersionKind (GVK) information through various watch-related components and adding corresponding tests to improve informer synchronization and watch behavior simulation. (A minimal fake-client watch sketch follows these key pull requests.)

  • URL: pull/133852
  • Merged: No
  • Associated Commits: 022b0, 23c07, 3fe66, abe3e, 4c62b, 862b9, 102de, ae2fd, 5915a, 613cc, 01359, fc340, 93aec, 6dbe4, 6918a, 10b55, b15d6, 94727, 90052, d7b87, 00df6, 72154, a6bf4, f6c8c, 8be54

2. Refactor pkg/scheduler/backend/queue tests to use mocks instead of a real metric registry: This pull request refactors the scheduler backend queue tests by replacing real metric registry usage with mock MetricAsyncRecorder implementations to enable faster, more isolated unit tests that avoid dependencies on the full metrics infrastructure and allow parallel test execution.

  • URL: pull/133906
  • Merged: No
  • Associated Commits: e6bbc, 9acb4, 8ae8a, 23c0d, b3157

3. make v1 resource version first priority in resource: This pull request updates the Kubernetes client to prioritize using the v1 version over v1beta2 for the resource.k8s.io API group by default, addressing issue #133131.

  • URL: pull/133876
  • Merged: No
  • Associated Commits: 9fa1b, e4c3a, 5fad6, 71cc3
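
For context on the fake-client watchlist work in pull request 1, here is a minimal sketch of how the fake clientset's existing watch machinery is driven in tests today; the PR extends this area so watchlist-based informers can be simulated as well.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes/fake"
)

func main() {
	cs := fake.NewSimpleClientset()

	w, err := cs.CoreV1().Pods("default").Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Objects created through the fake clientset surface as watch events.
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(),
		&corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "demo"}},
		metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	fmt.Println((<-w.ResultChan()).Type) // ADDED
}
```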

Other Open Pull Requests

  • Watch functionality update: This pull request refactors the Watch functionality to use the Etcd-Kubernetes contract interface methods List and Get via getList instead of directly calling the KV interface. This simplifies the code and enables easier future improvements such as adopting the clientv3.Watcher interface.
    pull/133908
  • Feature gate dependency enforcement: This pull request marks all Kubernetes feature dependencies explicitly and enforces the declaration of these dependencies for every feature through a unit test. This improves feature gate management and consistency across the codebase.
    pull/133912
  • Bug fix in ResourceIndexer List method: This pull request fixes a nil pointer dereference in the ResourceIndexer List method and improves error handling to prevent runtime crashes. The fix addresses the issue described in https://github.com/kubernetes/kubernetes/issues/133788.
    pull/133843
  • End-to-end test log enhancement: This pull request adds an end-to-end test enhancement that retrieves and checks logs from all containers in the "kube-system" namespace after a test run to detect and report data race errors as test failures. It is triggered only by a dummy test to avoid affecting pure conformance runs, and it can also check for unexpected or excessive error log entries, with failure messages formatted for GitHub issues.
    pull/133844
  • Kubelet /statusz endpoint listing: This pull request adds a list of available HTTP endpoints to the kubelet's /statusz page, improving visibility of active health, metrics, and debug endpoints. This enhancement aids better instrumentation and operational introspection.
    pull/133865
  • Context support and timeout for kubectl delete: This pull request introduces context support and a request-level timeout for delete operations in kubectl by adding new context-aware helper methods. This prevents delete commands from hanging indefinitely due to network issues, improving reliability and error handling.
    pull/133882
  • Kubelet metrics fix via cherry pick: This automated cherry pick fixes multiple Register calls in the kubelet metrics, resolving an issue where the volume stats collector was ignored. This restores the missing kubelet_volume_stats_* metrics.
    pull/133905
  • Multiprotocol test port update: This pull request updates the multiprotocol-test by changing the containerPort to 100 to avoid conflicts with an internal HTTP_PROXY intercepting traffic on port 80. This prevents test failures caused by Squid proxy interference.
    pull/133815
  • Test image base updates for security and consistency: Multiple pull requests update various test image bases from older versions to newer ones to incorporate security fixes and maintain consistency. These include updates to agnhost from 2.33 to 2.56, Debian Jessie to Bookworm, and Python and Windows base images to more current versions without functional changes.
    pull/133818, pull/133840, pull/133854, pull/133855, pull/133856, pull/133857, pull/133858, pull/133859
  • Removal of unnecessary build tag: This pull request removes the unnecessary usegocmp build tag that was originally added as a precautionary measure to allow reverting to an older implementation using "github.com/google/go-cmp/cmp". The tag has not been needed so far.
    pull/133828
  • Deployment spec validation fix: This pull request adds a condition to handle when the Deployment spec's progressDeadlineSeconds is set to the maximum 32-bit integer value. It ensures that in this case, the minReadySeconds validation against progressDeadlineSeconds is skipped.
    pull/133831
  • Documentation and command syntax updates: These pull requests update outdated URLs in Kubernetes API service resources documentation and fix the kubectl exec command syntax in tests to use the current recommended format. They also clarify the distinction between the cpuCFSQuotaPeriod kubelet configuration field and the CustomCPUCFSQuotaPeriod feature gate through updated comments and help texts.
    pull/133841, pull/133842, pull/133845
  • ClaimQueue and volumeQueue resync optimization: This pull request reduces backlog in the Kubernetes cluster's claimQueue and volumeQueue by modifying the resync method to enqueue Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) only when their status is not bound. This avoids unnecessary resyncing of already bound resources.
    pull/133848
  • DRA validation refactor: This pull request refactors the DRA validation by adding granular controls through the introduction of validationOptions and extending validation functions like validateSlice, validateSet, and validateMap. This facilitates the migration from handwritten to declarative validation.
    pull/133850
  • Conversion to context-aware wait methods: This pull request refactors the codebase by converting various instances of the wait.Until method to wait.UntilWithContext. This enables contextual logging as part of the effort described in issue #126379. (A miniature before/after example follows this list.)
    pull/133853
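
The wait.Until to wait.UntilWithContext conversion mentioned above looks like this in miniature (toy worker; real call sites vary):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Before: wait.Until(worker, period, stopCh) is stop-channel based, so
	// no context reaches the worker for contextual logging.
	// After: the context is threaded through, so the worker can pass it to
	// loggers (e.g. klog.FromContext) and react to cancellation.
	wait.UntilWithContext(ctx, func(ctx context.Context) {
		fmt.Println("tick")
	}, 200*time.Millisecond)
}
```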

3.2 Closed Pull Requests

This section provides a summary of pull requests that were closed in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.

Pull Requests Closed This Week: 75

Key Closed Pull Requests

1. Add +k8s:ifEnabled, +k8s:ifDisabled and +k8s:enumExclude tags: This pull request adds three new tags—+k8s:ifEnabled, +k8s:ifDisabled, and +k8s:enumExclude—to enable conditional declarative validations in Kubernetes by allowing validations to run based on feature option states and to conditionally exclude enum values.

  • URL: pull/133768
  • Merged: 2025-09-03T04:41:13Z
  • Associated Commits: ed170, 64d9d, 8435f, e8186, 5c955, 243f4, 15b29, e6ae0, d85ce

2. Automated cherry pick of #133425: Fix SELinux label comparison: This pull request is an automated cherry pick of a fix for SELinux label comparison in the Kubernetes controller manager, ensuring that SELinux warnings are correctly emitted for label conflicts involving the "level" component, which was previously not properly handled.

  • URL: pull/133745
  • Merged: 2025-09-04T15:31:17Z
  • Associated Commits: 3e75f, 98dca, 3ebbe

3. Automated cherry pick of #133425: Fix SELinux label comparison: This pull request is an automated cherry pick of a fix for SELinux label comparison in the Kubernetes controller manager, ensuring that SELinux label conflicts involving the "level" component are correctly detected to prevent missing event emissions related to SELinux warnings.

  • URL: pull/133746
  • Merged: 2025-09-04T09:27:25Z
  • Associated Commits: fe2d1, 2d6c2, c1a0f

Other Closed Pull Requests

3.3 Pull Request Discussion Insights

This section will analyze the tone and sentiment of discussions within this project's open and closed pull requests that occurred within the past week. It aims to identify potentially heated exchanges and to maintain a constructive project environment.

Based on our analysis, there are no instances of toxic discussions in the project's open or closed pull requests from the past week.


IV. Contributors

4.1 Contributors

Active Contributors:

We consider an active contributor in this project to be any contributor who has made at least 1 commit, opened at least 1 issue, created at least 1 pull request, or made more than 2 comments in the last month.

If there are more than 10 active contributors, the list is truncated to the top 10 based on contribution metrics for better clarity.

Contributor | Commits | Pull Requests | Issues | Comments
BenTheElder | 30 | 3 | 2 | 69
pohly | 19 | 10 | 5 | 31
liggitt | 0 | 0 | 0 | 63
dims | 7 | 2 | 6 | 35
serathius | 9 | 8 | 1 | 30
HirazawaUi | 8 | 11 | 1 | 23
jpbetz | 16 | 3 | 1 | 15
pacoxu | 2 | 3 | 1 | 26
nikos445 | 14 | 1 | 1 | 15
yliaog | 3 | 2 | 8 | 17
