Weekly GitHub Report for Pytorch: March 09, 2026 - March 16, 2026 (19:42:50)
Thank you for subscribing to our weekly newsletter! Each week, we deliver a comprehensive summary of your GitHub project's latest activity right to your inbox, including an overview of your project's issues, pull requests, contributors, and commit activity.
Table of Contents
I. News
1.1 Recent Version Releases:
The current version of this repository is v2.6.0
1.2 Version Information:
Released on January 29, 2025, PyTorch 2.6 introduces significant enhancements including torch.compile support for Python 3.13, a new dynamic compilation control API torch.compiler.set_stance, and improved AOTInductor packaging and ABI compatibility. Notable highlights also include beta-level FP16 support on X86 CPUs, expanded Intel GPU support with simplified installation, and a backward-incompatible security improvement flipping the default of torch.load to weights_only=True, alongside numerous performance optimizations, bug fixes, and deprecations such as the discontinuation of official Anaconda channel packages.
II. Issues
2.1 Top 5 Active Issues:
We consider active issues to be issues that have been commented on most frequently within the last week. Bot comments are omitted.
-
[ONCALL: DISTRIBUTED] [BOT-TRIAGED] Incompatibility between FSDP and nn.utils.parametrize: This issue describes a compatibility problem between PyTorch's FullyShardedDataParallel (FSDP) and the dynamic use of nn.utils.parametrize during the forward pass, which causes an assertion error due to parameter attribute conflicts when running distributed training. The user provides a minimal reproducible example showing that while the code works on a single GPU, it fails with multiple GPUs under FSDP, and attempts to patch the problem reveal deeper issues related to parameter management and in-place operations in the interaction between FSDP and parametrizations.
- The comments discuss potential fixes, including modifying internal checks and using the newer fully_shard API instead of the original FSDP, with detailed example scripts provided; the consensus is that fully_shard works correctly without workarounds, and the original issue can be closed as the newer approach resolves the problem and meets the user's needs.
- Number of comments this week: 9
-
[ONCALL: DISTRIBUTED] [BOT-TRIAGED] Way to have DTensor ops go through torch_function rather than go directly through torch_dispatch?: This issue discusses the challenge of having DTensor operations pass through the __torch_function__ override rather than directly through __torch_dispatch__, which prevents intercepting certain linear operations needed for custom autograd functionality in a tensor subclass used for MXFP8 training. The user seeks a way to ensure that when DTensor wraps the subclass, operations like linear do not bypass __torch_function__, as this disrupts the intended forward and backward coupling required for differentiable quantization and scaled matrix multiplication.
- The comments clarify that this behavior is by design because DTensor was intended to decouple forward and backward operations and not support fwd/bwd coupling at the subclass level; suggestions include reordering the wrapping order to have MXTensor over DTensor or migrating to SPMD types, with detailed explanations of how nested subclass dispatch works and why certain ops decompose and bypass __torch_function__.
- Number of comments this week: 7
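The wrapping-order suggestion above can be illustrated with a small pure-Python analogue of the `__torch_function__` protocol (no torch dependency; `dispatch`, `__override__`, and the classes below are all illustrative names, not PyTorch APIs): a dispatcher offers the call to the first argument whose type defines an override hook, so only the outermost wrapper gets a chance to intercept.

```python
# Pure-Python sketch of function-level override dispatch: the dispatcher
# hands the call to the first argument that defines an override hook,
# so the *outermost* wrapper intercepts before any default decomposition.

def dispatch(func, *args):
    for a in args:
        hook = getattr(type(a), "__override__", None)
        if hook is not None:
            return hook(a, func, *args)
    return func(*args)  # no wrapper claimed the call: default path

class Plain:
    def __init__(self, v):
        self.v = v

class Intercepting(Plain):
    def __override__(self, func, *args):
        # Intercept, tag the result, then fall through to raw values.
        return ("intercepted", func(*(getattr(a, "v", a) for a in args)))

def add(x, y):
    return x.v + y.v if isinstance(x, Plain) else x + y

print(dispatch(add, Plain(1), Plain(2)))         # 3 -- no hook fires
print(dispatch(add, Intercepting(1), Plain(2)))  # ('intercepted', 3)
```

This mirrors why the commenters suggest putting MXTensor outside DTensor: whichever subclass sits on the outside sees the call first, and an inner wrapper never gets the chance to intercept an op that the outer layer has already decomposed.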
-
[ONCALL: DISTRIBUTED] [BOT-TRIAGED] [RFC] Communication primitives for non-contiguous tensors: This issue proposes extending PyTorch's distributed communication primitives to natively support non-contiguous tensors, which are common in modern distributed deep learning workloads such as sequence parallelism, mixture of experts, and the Shampoo optimizer. It outlines a new API design and implementation strategies to enable in-place collective operations on strided tensor layouts, eliminating costly memory copies and improving performance and memory efficiency.
- The comments mainly consist of tagging relevant experts for awareness and discussion, with a brief exchange clarifying the nature of tensor contiguity in column-wise sharding and suggesting the potential need for a utility to better detect contiguous storage underlying non-contiguous tensors.
- Number of comments this week: 7
-
[TRIAGE REVIEW] [MODULE: BINARIES] [MODULE: MACOS] [MODULE: OPENMP] [BOT-TRIAGED] Some macOS wheels say macOS 11.0+ but inside there's libomp.dylib with minos 14.0: This issue concerns a discrepancy in the macOS version compatibility metadata of PyTorch wheels, where some wheels are labeled as supporting macOS 11.0+ but contain a libomp.dylib binary built with a minimum OS version of 14.0, potentially causing compatibility problems for users on older macOS versions. The user is seeking clarification on whether this mismatch is intentional or a build error and requests updated wheels that truly support macOS 11.0 to avoid forcing users to switch to alternative package ecosystems like conda.
- The comments clarify that the issue was recognized and fixed for the upcoming 2.11.0 release, with no plans to update the 2.10.0 wheels; maintainers discuss the challenges of supporting older macOS versions given Apple's deprecation, and the user expresses appreciation for continued support of macOS 11+ builds while acknowledging the maintainers' constraints.
- Number of comments this week: 6
-
[NEEDS REPRODUCTION] [MODULE: CUDA] [MODULE: MEMORY USAGE] [TRIAGED] [BOT-TRIAGED] tensor cuda memory cannot be release by gc when a tensor hold ref of another: This issue reports that CUDA memory allocated for a tensor is not released by the garbage collector when one tensor holds a reference to another tensor, demonstrated with a minimal example where memory usage remains non-zero after cleanup. The user suspects the problem relates to how tensor __dict__ references are handled during deallocation in the PyTorch codebase and notes the issue appears specifically with Python 3.14, while others have been unable to reproduce it on different setups or Python versions.
- The comments discuss attempts to reproduce the issue, with one user suspecting the problem lies in tensor reference handling during deallocation, while others report they cannot reproduce it, including on Python 3.14, suggesting the problem may be environment or version specific.
- Number of comments this week: 5
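The reference pattern behind this report can be reproduced in plain Python with no torch dependency (the `Holder` class below is illustrative): when objects reference each other through their `__dict__`, CPython's reference counting alone can never free them, and reclamation has to wait for the cycle collector.

```python
import gc
import weakref

class Holder:
    pass

a, b = Holder(), Holder()
a.other = b          # stored in a.__dict__
b.other = a          # reference cycle via __dict__
probe = weakref.ref(a)

del a, b             # refcounts stay > 0 because of the cycle
assert probe() is not None   # still alive: refcounting cannot free a cycle

gc.collect()         # the cycle collector breaks the cycle
assert probe() is None       # now reclaimed
```

For GPU tensors the stakes are higher than for ordinary objects: the CUDA allocation stays reserved for the entire window between `del` and the eventual `gc.collect()`, which is consistent with the non-zero memory readings described in the issue.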
2.2 Top 5 Stale Issues:
We consider stale issues to be issues that have had no activity within the last 30 days. The team should work together to get these issues resolved and closed as soon as possible.
As of our latest update, there are no stale issues for the project this week.
2.3 Open Issues
This section lists, groups, and then summarizes issues that were created within the last week in the repository.
Issues Opened This Week: 96
Summarized Issues:
- Backend and Compilation Errors: Multiple issues report runtime errors, assertion failures, and crashes related to backend implementations and compilation processes in PyTorch. These include problems with cuDNN backend ignoring parameters causing assertion errors, Inductor backend fusion failures due to stride requirements, device mismatch errors in torch.vmap with fake tensors, and bugs in torch.compile such as failures with flex_attention on nested tensors and conv2d layers with large kernels.
- Numerical and Gradient Inconsistencies: Several issues highlight numerical inconsistencies and incorrect gradient computations across different devices or backends. Examples include bitwise mismatches in dynamo decomposition, catastrophic NaNs in CUDA nn.LSTM and Conv2d near float32 limits, and incorrect gradients on MPS backend after batch size changes.
- Memory Management and Garbage Collection Issues: There are reports of CUDA memory not being released properly by the garbage collector in scenarios involving nested tensors and tensor references, leading to persistent memory usage and potential leaks.
- Performance Regressions and Optimizations: Multiple issues document performance regressions in Inductor backend and other components, including slower execution after kernel grid changes, drops in T5 model performance, and slowdowns in dynamic shape inference for various models. Some issues propose performance improvements such as optimizing torch.library.custom_op dispatch paths and adding performance testing for deterministic mode.
- Testing Failures and Infrastructure Issues: Numerous issues report failing unit tests across various architectures (especially AArch64), broken CI systems, flaky or disabled tests, and requests for improved test skipping mechanisms and better test infrastructure for out-of-tree backends.
- issues/177084, issues/177119, issues/177218, issues/177225, issues/177243, issues/177244, issues/177245, issues/177247, issues/177248, issues/177249, issues/177250, issues/177251, issues/177253, issues/177254, issues/177255, issues/177256, issues/177257, issues/177258, issues/177259, issues/177260, issues/177327, issues/177483
- Distributed and Parallelism Issues: Issues include race conditions when compiling multiple CppExtensions in parallel, device communication failures with NCCL, and requests to extend distributed communication primitives to support non-contiguous tensors and strided memory layouts for advanced parallelism scenarios.
- API and Feature Proposals: Several issues propose new features or API improvements such as a unified RNG API for accelerators, a validate flag for torch.multinomial to skip validation kernels, a torch.narrow_scatter operator, and a multi-level skip mechanism for tests to better support out-of-tree backends.
- Documentation and Usability Issues: Some issues report incorrect or unclear documentation, such as a missing head dimension in the scaled_dot_product_attention docs, and unclear error messages during tensor shape validation, which can confuse users and hinder debugging.
- Platform and Compatibility Concerns: Issues include macOS wheels labeled for older versions but containing binaries requiring newer OS versions, lack of C++ stacktrace support on Windows, and bugs related to ROCm HIP runtime causing false watchdog failures.
- Export, ONNX, and Serialization Bugs: Problems with export and ONNX serialization include incorrect attribute values in Resize nodes during ONNX export, repeated subgraph lifted constants causing export failures, and traceback resetting in torch.export.export() making debugging difficult.
- Device and Tensor Operation Bugs: Various bugs affect tensor operations such as repeat_interleave failing with unbacked SymInt, torch.conv1d accepting invalid weight shapes on the meta backend, CatBackward0 rejecting valid gradients on jagged nested tensors, and conj inside einsum on the MPS device returning incorrect results.
- CUDA and GPU Specific Errors: Issues include runtime errors from unsupported __CUDA_ARCH__ macro usage, misleading error messages when CPU tensors are passed to CUDA JIT APIs, and silent dtype mismatch acceptance in torch.compile for bmm and matmul operations.
2.4 Closed Issues
This section lists, groups, and then summarizes issues that were closed within the last week in the repository. This section also links the associated pull requests if applicable.
Issues Closed This Week: 44
Summarized Issues:
- Memory and Performance Issues in Compilation and Export: Several issues report memory errors and performance bottlenecks during compilation and export processes. These include OutOfMemoryError due to excessive kernel fusion in Triton (issue 175250), O(N^2) complexity slowing ONNX export (175872), memory pool check failures causing crashes in reduce-overhead mode (176310), and VRAM regression causing OOM errors in conv3d (177406).
- [issues/175250, issues/175872, issues/176310, issues/177406]
- Torch.compile and Graph Breaks: Multiple issues describe problems with torch.compile causing unexpected graph breaks or incorrect behavior. These include graph breaks caused by a.new_tensor usage (176067), torch.compile incorrectly invoking property setters/getters (176596, 176599), and torch.compile mishandling Tensor subclasses overriding torch_function (176679, 176686).
- [issues/176067, issues/176596, issues/176599, issues/176679, issues/176686]
- Triton Kernel and Backend Failures: Several issues highlight bugs and crashes in Triton kernels and related backend compilation. These include segfaults on RTX PRO 6000 Blackwell GPUs due to invalid code generation (176426), compilation errors from pow(float32, int64) in Triton codegen (177131), and Inductor subprocess compilation failures on Triton HEAD (177357).
- [issues/176426, issues/177131, issues/177357]
- Regression and Accuracy Failures in TorchInductor Benchmarks: There are regressions in accuracy checks for TorchInductor benchmarks on H100 GPUs between PyTorch 2.10 and 2.11 release candidates. This affects the openai/whisper-tiny model (176411) and the TIMM vit_base_patch14_dinov2.lvd142m model (176417), indicating inference and training accuracy validation failures.
- [issues/176411, issues/176417]
- NCCL and CUDA Backend Issues: Issues related to CUDA and NCCL backends include thread safety concerns in NCCL SymmMem (176419), ImportError due to undefined symbol 'ncclAlltoAll' in libtorch_cuda.so (176829), and assertion errors in kernel scheduler loop reordering after fusion (176591).
- [issues/176419, issues/176829, issues/176591]
- MPS Backend Limitations and Bugs: Several issues report MPS backend problems such as incorrect output shapes in scaled_dot_product_attention (176767), unsupported operator errors when exporting TransformerEncoderLayer to ONNX (177013), and unimplemented aten::linalg_qr.out operator blocking workflows on Apple Silicon (177370).
- [issues/176767, issues/177013, issues/177370]
- Serialization and Export Bugs: Bugs affecting serialization and export include failure to save torch.float8_e8m0fnu data type (176621) and ModuleNotFoundError for torch._inductor.async_compile causing build failures in vllm (176581).
- [issues/176621, issues/176581]
- Inconsistent Behavior and Numerical Issues: Issues report inconsistent initialization results from reset_parameters depending on memory format (175982), nondeterministic outputs from torch.nn.functional.pad with circular mode (176254), and discrepancies in torch.erfinv output compared to PaddlePaddle (176373).
- [issues/175982, issues/176254, issues/176373]
- Error Handling and Exception Bugs: There are bugs in exception handling such as Dynamo raising graph breaks instead of TypeError for non-BaseException values (176787), and copy.deepcopy failing on LeafSpec due to missing attribute initialization (177045).
- [issues/176787, issues/177045]
- Documentation and Usability Improvements: Some issues highlight documentation gaps and usability improvements, including the lack of a minimal example for MuonWithAuxAdam optimizer replacement (177029), and clarifications needed in macOS naming and installation instructions (176907).
- [issues/177029, issues/176907]
- Other Compilation and Runtime Errors: Additional issues include C++ compilation failures due to cpp_wrapper incompatibility with CUTLASS backend (176080), illegal instruction errors on Raspberry Pi 4 fixed in nightly (176993), and assertion errors in Helion kernel test due to unexpected None variable range (177394).
- [issues/176080, issues/176993, issues/177394]
2.5 Issue Discussion Insights
This section will analyze the tone and sentiment of discussions within this project's open and closed issues that occurred within the past week. It aims to identify potentially heated exchanges and to maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open or closed issues from the past week.
III. Pull Requests
3.1 Open Pull Requests
This section provides a summary of pull requests that were opened in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.
Pull Requests Opened This Week: 263
Key Open Pull Requests
1. Use wrapper function for increase accuracy of golden reference.: This pull request introduces and refactors a wrapper function using IEEE precision to improve the accuracy of golden reference computations in SDPA tests, along with adding a new CUDA backend precision setting and updating related tests and documentation to ensure consistent and precise floating-point behavior.
- URL: pull/177454
- Associated Commits: bac1d, 29759, 58ffc, c2692, c0811, e9e46, dfb2a, 5c54e, ca17a, 76396, eb78e, 6530f, 57f4e, b8ade, cad06, ea873, e9253, 94428, 218be, f2115, 3752a, f2df3, 72710, 9a373, 32ea6, 96a68, a8ceb, e7317, 2f799, 9e32d, da619, 79276, d3b65, 6280f, e6679, e9519
2. [Helion + torch.compile] Add unit test for ExternalTritonTemplateKernel fusion: This pull request adds a unit test for the fusion of ExternalTritonTemplateKernel in the Helion backend integrated with torch.compile, specifically testing prologue and epilogue fusion scenarios using a mock external template buffer to verify the correct setup and handling of fusion hooks and extra inputs in the kernel.
- URL: pull/177065
- Associated Commits: 4dede, 90a70, 297ef, fc8dd, bee95, 8d16a, 067cb, ad866, 75368, aec2a, 9e91b, 0a268, e3da9, 54db4, ac045, 92077, 44dcf, 10743, 62f03, 1f7ba, fdd61, cc6e4, 1dd01, 47382, 4ec4a, df941, a3e53, fc15a, 28ce0, 7f90b, 2e3bc, d2624, 2b4ff
3. [DO NOT MERGE][xpu] Test FP8 blocked scaling: This pull request is a test integration that merges all three related pull requests for XPU scaled matrix multiplication (scaled_mm) with FP8 blockwise scaling, consolidating the changes for continuous integration testing purposes.
- URL: pull/177362
- Associated Commits: 03782, f7b0e, de201, 66c85, 2fde3, ee94b, 6f782, 8b963, 9399e, 810a1, 20577, d60ba, 7ed2e, c8418, 07c67, 2aee4, 16a4f, ae03c, 333c7
Other Open Pull Requests
3.2 Closed Pull Requests
This section provides a summary of pull requests that were closed in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.
Pull Requests Closed This Week: 350
Key Closed Pull Requests
1. [DTensor] Register single-dim strategies for categorized pointwise ops: This pull request switches categorized pointwise operations in the DTensor component from using the register_op_strategy method to register_single_dim_strategy by leveraging newly introduced infrastructure such as category lists, rule constants, and factory functions, while retaining the old registrations as fallbacks for certain variants to be migrated later.
- URL: pull/175795
- Associated Commits: 55871, 327d3, b1b59, bd5d6, 17dd2, b109c, dd0d1, 7b3b5, 382ee, d607f, bbdf4, e34e2, 9b520, 44578, 48e09, 607cf, f604c, 995c3, 56c71, eabbc, 0ca22, b49c7, bd517, 4f419, aa766, a3b39, 43f41
2. [Helion + torch.compile] Refactor template codegen pipeline for extensibility: This pull request proposes a comprehensive refactor of the template code generation pipeline in PyTorch to improve extensibility by restructuring the code so that external template backends like Helion can participate in epilogue/prologue fusion without duplicating Triton-specific logic, achieved by moving epilogue/prologue codegen into kernel subclasses, renaming and redefining key methods for clearer extension points, adding indent-aware hook substitution, and extracting reusable helper functions from monolithic methods to enable easier customization and reuse by external kernel classes.
- URL: pull/177064
- Associated Commits: 70263, 996b6, 2f9e4, 2c41c, 909d1, e045f, e9297, 40a85, 059a3, 54b2b, 38c69, 319b3, 67a48, d1edb, f601e, 076e2, e4948, 62735, 9494d, 1a1e3, d3ffe, 2c1c8, f9220, b81a9, 6f9dc, 8384a, 72eb2
3. [WIP] Add SVE128 support: This pull request is a work-in-progress aimed at adding support for SVE128 (Scalable Vector Extension 128-bit) to the PyTorch project.
- URL: pull/175878
Other Closed Pull Requests
- NCCLDevCommManager refactor: This pull request refactors the NCCLDevCommManager to improve its API design by transforming it into a pure registry that no longer creates device communicators by default. It introduces separate maps for host and device communicators, enhances error handling, renames registration methods, adds documentation, and cleans up related code for better consistency and usability. - pull/177380
- Dynamo enum and dictionary key handling improvements: These pull requests address issues in Dynamo related to enum membership checks and dictionary key lookups. One adds support for __contains__ on Flag enum variables to avoid graph breaks, while the other fixes dictionary key lookups using torch.Size objects containing TensorVariable elements by enhancing hashing and equality checks. - pull/177440, pull/177312
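The enum half of that change can be seen with Python's standard enum module alone (pure stdlib; the `Perm` flag below is illustrative): membership checks on a Flag value dispatch to `Flag.__contains__`, which is the operation the Dynamo change needed to model without a graph break.

```python
from enum import Flag, auto

class Perm(Flag):
    READ = auto()
    WRITE = auto()
    EXEC = auto()

mode = Perm.READ | Perm.WRITE

# `in` on a Flag value dispatches to Flag.__contains__,
# testing whether the member's bits are set in the composite value.
print(Perm.READ in mode)   # True
print(Perm.EXEC in mode)   # False
```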
- Profiler activity filtering enhancement: This pull request introduces fine-grained filtering of individual activity types within profiler activity groups in
torch.profiler.profile. It allows users to specify subsets of activities to be collected, improving profiling flexibility across multiple device types including CUDA, XPU, MTIA, HPU, and PrivateUse1. - pull/176351
- Varlen attention feature additions: These pull requests add new features to the varlen attention mechanism, including a num_splits parameter to disable splitting of key-value pairs for batch invariance and a page_table feature specific to FA3. They also ensure compatibility by throwing errors if the page_table is used with older versions. - pull/176905, pull/175924
- Triton kernel and hashing improvements: These pull requests enhance Triton kernel support by adding host-side TMA descriptor support in lazy compilation and improving the Triton hashing mechanism to include external library hashes for better cache key accuracy.
- pull/175548, pull/175674
- Dynamo virtual tensor and id() support: These pull requests improve Dynamo by preventing realization of virtual tensors for the first argument and enabling support for the id() function on container variable types via a compile-time-only FakeIDVariable. This allows tracing through functions like copy.deepcopy without stale id issues. - pull/177078, pull/177443
- Graph selection and dynamic shapes stability in Dynamo: This pull request improves the stability of graph selection in Dynamo's automatic dynamic shapes feature by fixing bugs related to exclusion guards for tensor sizes and scalar integers. It refines exclusion logic, renames configuration flags for clarity, and validates changes with tests.
- pull/175881
- Native operations framework addition: This pull request adds a native operations framework to PyTorch, including utilities for package/version checking, DSL-specific utilities for Triton and CuteDSL, a general registry for native operations, and initial documentation on adding DSLs and native ops.
- pull/176280
- WrapperUserFunctionVariable inheritance fix: This pull request fixes an issue in Dynamo by having WrapperUserFunctionVariable inherit from BaseUserFunctionVariable instead of VariableTracker. This ensures proper handling of special attributes and compatibility with functools.wraps when tracing lru_cache-wrapped functions. - pull/176934
- NCCL Symmetric Memory rendezvous optimization: This pull request improves tensor-to-allocation lookup performance by introducing a first-level cache and using cuMemGetAddressRange() for efficient CUDA allocation retrieval. It significantly reduces overhead from linear scans, achieving about a 135× speedup for large allocations. - pull/176744
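The caching idea behind that optimization can be sketched in plain Python (all names below are illustrative; the actual change calls cuMemGetAddressRange() inside PyTorch's NCCL SymmMem code): a pointer-to-allocation lookup that would otherwise scan registered ranges keeps a first-level dict keyed by the allocation's base address, so repeat lookups into the same allocation skip the scan entirely.

```python
import bisect

class RangeRegistry:
    """Maps an address to its containing allocation (base, size)."""
    def __init__(self, allocations):
        # allocations: list of (base, size) pairs, non-overlapping
        self.allocs = sorted(allocations)
        self.bases = [b for b, _ in self.allocs]
        self._cache = {}   # first-level cache: base address -> (base, size)
        self.scans = 0     # count slow-path lookups, for illustration

    def _slow_lookup(self, addr):
        self.scans += 1
        i = bisect.bisect_right(self.bases, addr) - 1
        if i >= 0:
            base, size = self.allocs[i]
            if base <= addr < base + size:
                return base, size
        return None

    def lookup(self, addr, base_hint):
        # base_hint plays the role of cuMemGetAddressRange(): a cheap
        # query for the allocation base of addr, used as the cache key.
        hit = self._cache.get(base_hint)
        if hit is None:
            hit = self._slow_lookup(addr)
            if hit is not None:
                self._cache[base_hint] = hit
        return hit

reg = RangeRegistry([(0x1000, 0x100), (0x2000, 0x200)])
assert reg.lookup(0x2010, 0x2000) == (0x2000, 0x200)  # slow path, cached
assert reg.lookup(0x2020, 0x2000) == (0x2000, 0x200)  # served from cache
assert reg.scans == 1
```

The design point is that the cache key comes from a constant-time driver query rather than the looked-up address itself, so every pointer inside the same allocation shares one cache entry.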
- Regression tests for SAC_IGNORED_OPS annotation: This pull request adds regression tests to ensure that operations like detach correctly receive the PREFER_RECOMPUTE annotation without leaking from preceding operations. It uses allow_in_graph to maintain test stability despite decomposition changes. - pull/176923
- MPS backend cdouble tensor creation error: This pull request introduces an error when attempting to create a torch.cdouble tensor on the MPS backend, which does not support this data type. It aligns MPS behavior with the double type and includes related test adjustments. - pull/176925
- Torch.compile crash fix with TorchFunctionMode: This pull request fixes a crash in torch.compile caused by TorchFunctionMode with mutable state by moving mode stack operations to compile_inner. This ensures modes are off the C stack during compilation, preventing unintended dispatches and guard verification failures. - pull/177095
- Graph partition and cudagraphs integration fix: This pull request fixes routing of CPU tensors through CUDA partitions to prevent device mismatches causing segmentation faults. It eliminates redundant CUDA-to-CPU-to-CUDA transfers by selectively inserting device_put operations and preserving CPU tensor references for backward passes. - pull/176164
- Pre-gradient transformation pass caching improvement: This pull request modifies the PyTorch compilation process by moving pre-gradient transformation passes to occur after the AOTAutograd cache lookup. This ensures transformations are included in cached artifacts and only run on cache misses, avoiding unnecessary work on cache hits.
- pull/176340
- Triton backend pow operation dtype propagation fix: This pull request fixes incorrect dtype propagation for scalar integer exponents in the Triton backend's pow operation. It makes propagation scalar- and sign-aware, ensures exact integer computation for non-negative exponents, preserves backend-specific constant lowering, and adds regression tests to prevent silent rounding errors. - pull/177272
- Code quality and maintainability improvements (unmerged): This pull request includes multiple commits to improve code quality by adding explicit types, fixing lint and import order issues, preserving validation for empty model selectors, and addressing a formatter regression, but it was not merged.
- pull/176923
3.3 Pull Request Discussion Insights
This section will analyze the tone and sentiment of discussions within this project's open and closed pull requests that occurred within the past week. It aims to identify potentially heated exchanges and to maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open or closed pull requests from the past week.
IV. Contributors
4.1 Contributors
Active Contributors:
We consider an active contributor in this project to be any contributor who has made at least 1 commit, opened at least 1 issue, created at least 1 pull request, or made more than 2 comments in the last month.
If there are more than 10 active contributors, the list is truncated to the top 10 based on contribution metrics for better clarity.
| Contributor | Commits | Pull Requests | Issues | Comments |
|---|---|---|---|---|
| anijain2305 | 277 | 22 | 0 | 2 |
| anshul-si | 188 | 35 | 0 | 1 |
| malfet | 160 | 11 | 0 | 21 |
| mlazos | 133 | 28 | 2 | 4 |
| laithsakka | 126 | 10 | 0 | 6 |
| pianpwk | 100 | 13 | 1 | 14 |
| Skylion007 | 32 | 12 | 0 | 72 |
| NikhilAPatel | 99 | 12 | 0 | 0 |
| soulitzer | 95 | 8 | 0 | 2 |
| yf225 | 89 | 13 | 0 | 2 |