Weekly GitHub Report for PyTorch - 2024-07-10 15:16:21
Weekly GitHub Report for PyTorch
Thank you for subscribing to our weekly newsletter! Each week, we deliver a comprehensive summary of your GitHub project's latest activity right to your inbox, including an overview of your project's issues, pull requests, contributors, and commit activity.
I. Issues
1.1 Open Issues
Open Issues This Week: 73
Summarized Issues:
- Compilation Errors in PyTorch: Compilation errors in PyTorch can arise from various issues, such as the `sort` operation throwing an error due to `tt.broadcast` requiring the same encoding for all operands and results. Another example is a prolonged compilation time for packing a 2D int4 tensor into an int32 tensor using `torch.compile`. Additionally, a `CppCompileError` can occur when running a `torch.compile` example, potentially due to a non-pip build process.
- Graph Breaks and CUDA Errors: Enabling `TorchFunctionMode` within a compiled region using `torch.compile` can cause graph breaks. Additionally, running a PyTorch script with multiple GPUs using the NCCL backend can result in an `ncclUnhandledCudaError`, leading to a CUDA initialization error. Another issue involves a `RuntimeError` related to a CUDA error (`CUBLAS_STATUS_EXECUTION_FAILED`) during the execution of the `cublasGemmEx` function.
- ONNX Export Issues: Exporting PyTorch models to ONNX format can fail due to various bugs. For instance, a Group Normalization (GN) layer with a two-dimensional input can cause an "IndexError: list index out of range." Another issue involves a `torch.onnx.OnnxExporterError` during model export using `dynamo_export`. Additionally, exporting a model with a constant tensor using `torch.no_grad()` can result in a `SpecViolationError`. (The general `dynamo_export` call pattern is sketched after this list.)
- Model Export and Serialization: Issues with model export and serialization include a failure to install PyTorch on Alpine Linux due to conflicting dependencies. Another problem is the inability to serialize wrapper subclasses with custom Python dispatch implementations for size and stride information. Additionally, the ONNX serialization process can generate illegal identifiers that do not conform to C90 identifier syntax rules.
- Inductor Backend Bugs: The PyTorch Inductor backend has several bugs, such as missing nodes for backward graphs, specifically the `gelu_backward` nodes. Another issue involves incorrect strides for output tensors generated by the `torch.empty_strided` function. Additionally, deferred runtime asserts can incorrectly trigger assertion failures during the backward pass in Inductor code generation.
- CUDA and GPU Memory Issues: Using `pin_memory()` in subprocesses within a custom data transformation pipeline can lead to CUDA out-of-memory errors. Another issue is that the `torch.cuda.caching_allocator_delete` function does not free up GPU memory until the notebook kernel is restarted. Additionally, a regression in the inference pass rate for the `pyhpc_turbulent_kinetic_energy` benchmark occurs when using the `CG_freezing` configuration with the `inductor` backend on a CUDA device.
- Dynamic Shapes and Symbolic Sizes: Issues with dynamic shapes and symbolic sizes include a runtime error when saving inputs to a graph with dynamic shapes using `torch.save`. Another problem is a numerical error when using the `funcol.reduce_scatter` function on a non-contiguous tensor. Additionally, a function's output can differ between eager mode and compiled mode with `capture_dynamic_output_shape_ops=True`.
- Documentation and Usability: Several issues highlight the need for improved documentation and usability in PyTorch. For example, the `normal()` function lacks documentation for several arguments. Another issue is the lack of support for the `torch.nn.functional.conv_transpose3d` function on MPS (Apple Silicon). Additionally, there is a request for a feature to simplify exported FX models and provide tools to count FLOPs and memory usage.
- Feature Requests and Enhancements: Feature requests and enhancements include developing a baseline ONNX interpreter or executor using Python/PyTorch. Another request is for a `shutdown()` method for the `ProcessGroupGloo` distributed backend. Additionally, there is a proposal to add an Exponential Moving Average (EMA) module wrapper to facilitate smoother model weight updates during training. (A minimal sketch of such a wrapper appears after this list.)
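To make the ONNX export path referenced above concrete, here is a minimal sketch of the `torch.onnx.dynamo_export` call pattern. This is a generic, working-configuration example (rank-3 input to GroupNorm), not the failing reproducer from the issues.

```python
import torch

class TinyModel(torch.nn.Module):
    def forward(self, x):
        # GroupNorm over 6 channels split into 2 groups.
        return torch.nn.functional.group_norm(x, num_groups=2)

model = TinyModel().eval()
x = torch.randn(4, 6, 8)  # the reported failure involves a rank-2 input instead

# dynamo_export returns an ONNXProgram object that can be saved to disk.
onnx_program = torch.onnx.dynamo_export(model, x)
onnx_program.save("tiny_model.onnx")
```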
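For the EMA proposal mentioned above, the following is a minimal sketch of what such a wrapper could look like; the `EMAWrapper` class and its API are hypothetical illustrations, not an existing PyTorch module.

```python
import copy
import torch

class EMAWrapper:
    """Hypothetical sketch: maintains an exponential moving average of a model's parameters."""

    def __init__(self, model: torch.nn.Module, decay: float = 0.999):
        self.decay = decay
        self.ema_model = copy.deepcopy(model).eval()
        for p in self.ema_model.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model: torch.nn.Module):
        # ema_param <- decay * ema_param + (1 - decay) * param
        for ema_p, p in zip(self.ema_model.parameters(), model.parameters()):
            ema_p.lerp_(p, 1.0 - self.decay)

# Usage sketch: call ema.update(model) after each optimizer step and
# run evaluation with ema.ema_model for smoother weights.
```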
1.2 Top 5 Active Issues:
We consider active issues to be issues that have generated significant discussion in their comments.
- How to get stream operators in custom backend compiler ?: This issue is about a user encountering a problem where stream-related operations are missing from the fx graph when using a custom backend compiler in PyTorch. The user seeks guidance on how to retain these stream operations in the fx graph after the `aot_module_simplified` process.
- The comments discuss the user's need for stream operations in custom compilers, potential solutions, and the complexity of supporting user-specified streams. The user clarifies their use case involving NV GPUs and NPUs, and the conversation explores the feasibility of preserving stream information through the lowering stack, especially for inference purposes.
- Number of comments: 7
- "device >= 0 && device < num_gpus INTERNAL ASSERT FAILED" with torch 2.5.0.dev20240705+cu121 on 2 GPU NVIDIA-A100-SXM4-80GB-MIG-3g.40gb: This issue reports a bug where the error `torch.cuda.DeferredCudaCallError: CUDA call failed lazily at initialization with error: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED` occurs when using PyTorch nightly (2.5.0.dev20240705+cu121) on an NVIDIA-A100-SXM4-80GB-MIG-3g.40gb with 2 GPUs. The error was expected to be resolved in version 2.4.0rc1 but persists, particularly when running the `collect_env` script.
- The comments discuss that running the `collect_env` script alone works fine, but integrating it into a training script triggers the error. A minimal reproducible example is provided, and it is suggested that the fix might not have propagated to the nightly builds yet. Validation links for the 2.4 rc and nightly builds are shared, and it is confirmed that the issue is specific to MIG configurations, as it works fine on a non-MIG A100 GPU.
- Number of comments: 6
- Strange bug: cdist segfaults on M1 Max ARM, for certain size tensors, if numpy is loaded first. Happens in 2.3.1 but not 2.2.2: This issue describes a segmentation fault occurring with the `cdist` function on an M1 Max ARM processor when certain size tensors are used, and numpy is imported before torch. The problem is specific to PyTorch version 2.3.1 and does not occur in version 2.2.2.
- The comments discuss that downgrading to version 2.2.2 resolves the issue, suggest that the problem might be related to numpy's handling of shared libraries, and recommend importing torch before numpy. There is hope that the issue is fixed in the upcoming 2.4 release, and testing with the 2.4 release candidate confirms that the problem does not occur.
- Number of comments: 5
- capture_dynamic_output_shape_ops=True changing expected output between eager and compiled versions: This issue describes a discrepancy in the output of a PyTorch function when run in eager mode versus compiled mode with the `capture_dynamic_output_shape_ops=True` configuration. The provided code snippet demonstrates that the function, which maps elements of one tensor to indices of another tensor, produces different results depending on whether the configuration is enabled or not. (A minimal sketch of this configuration flag appears after this list.)
- The comments discuss the inability to reproduce the issue with certain versions, observations about the issue occurring on both CPU and CUDA, and a detailed analysis of the problem being related to non-contiguous tensor strides in the `argwhere` function. The final comment suggests that the meta information for `argwhere` might be incorrect.
- Number of comments: 5
- Pin memory in subprocess: This issue discusses the challenges of using `pin_memory()` in subprocesses within a custom data transforms pipeline for parallel data processing, which leads to CUDA Out of Memory (OOM) errors. The user is seeking a way to pin tensors without relying on PyTorch's pinned memory caching allocator to avoid increased GPU wait times when pinning is done in the main process. (A minimal sketch of the basic pinned-memory pattern appears after this list.)
- The comments discuss the limitations of the generic `DataLoader` for the user's complex data pipeline, including dynamic batching, sophisticated data transforms, and ordered data processing. The user explains their custom solution and the need for pinning memory in subprocesses, while others suggest potential workarounds and inquire about the overhead of using multiprocess queues.
- Number of comments: 4
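For the `capture_dynamic_output_shape_ops` issue above, here is a minimal, hypothetical sketch of how that flag and an `argwhere`-style data-dependent op are typically exercised when comparing eager and compiled outputs; it is not the reproducer from the issue.

```python
import torch
import torch._dynamo

# Allow Dynamo to capture ops such as argwhere whose output shape depends on tensor data.
torch._dynamo.config.capture_dynamic_output_shape_ops = True

def nonzero_positions(x):
    return torch.argwhere(x > 0)

x = torch.tensor([0.0, 1.5, 0.0, -2.0, 3.0])

eager_out = nonzero_positions(x)
compiled_out = torch.compile(nonzero_positions)(x)

# The issue reports configurations in which comparisons like this one diverge.
print(torch.equal(eager_out, compiled_out))
```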
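For the pinned-memory issue above, a minimal sketch of the basic pattern under discussion: pinning a CPU tensor and issuing an asynchronous host-to-device copy. The subprocess setup and custom pipeline from the issue are intentionally omitted.

```python
import torch

batch = torch.randn(1024, 256)   # a CPU tensor produced by some data pipeline
pinned = batch.pin_memory()      # page-locked copy, served by PyTorch's pinned-memory caching allocator

if torch.cuda.is_available():
    # With pinned memory, the copy can overlap with other GPU work.
    device_batch = pinned.to("cuda", non_blocking=True)
```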
1.3 Top 5 Quiet Issues:
We consider quiet issues to be issues that have been opened in this project for the longest time. The team should work together to get these issues resolved and closed as soon as possible.
- compiling sort throws `error: 'tt.broadcast' op requires the same encoding for all operands and results`: This issue involves a compilation error encountered when running a specific PyTorch script, which results in the error message: `'tt.broadcast' op requires the same encoding for all operands and results`. The problem is suspected to be related to the sort code generation within the script, and the user has provided a detailed code snippet to help reproduce and diagnose the issue.
- Open for 6 days, 08 hours, 24 minutes
- `torch.compile` graph breaks on `TorchFunctionMode`: This issue highlights a problem where the `torch.compile` function experiences graph breaks when `TorchFunctionMode` is enabled within a compiled region. The user suggests that it would be beneficial to allow `TorchFunctionMode` to operate without causing these interruptions, aiming to improve the functionality and performance of compiled code. (A minimal sketch of this pattern appears after this list.)
- Open for 6 days, 07 hours, 53 minutes
- [ONNX]: Fail to export onnx when GroupNorm input shape rank=2: This issue pertains to a bug in the PyTorch library where the export to ONNX format fails when the Group Normalization (GN) layer receives two-dimensional input shapes. The error occurs due to an index out of range problem in the symbolic helper function during the unsqueeze operation, as indicated by the traceback provided.
- Open for 6 days, 06 hours, 55 minutes
- ncclUnhandledCudaError: Call to CUDA function failed.: This issue pertains to a bug encountered when running a PyTorch script using the NCCL backend for distributed data parallelism, resulting in an unhandled CUDA error. The error occurs during the initialization of the `DistributedDataParallel` model, specifically when verifying parameter shapes across processes, and is accompanied by a traceback indicating a CUDA initialization failure.
- Open for 6 days, 03 hours, 56 minutes
- [feature request] [discussion] Baseline ONNX interpreter / executor in python / PyTorch: This issue is a feature request and discussion about developing a baseline ONNX interpreter or executor using Python and PyTorch, aimed at facilitating basic performance testing of third-party ONNX files across different backends. The motivation behind this request is to leverage PyTorch's rapid evolution and flexibility for server-side inference, especially when the original model is provided as an ONNX file and the source code is unavailable or difficult to run.
- Open for 6 days, 03 hours, 53 minutes
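To illustrate the `torch.compile` / `TorchFunctionMode` issue above, here is a minimal sketch of the pattern reported to cause graph breaks; the pass-through mode is written purely for illustration.

```python
import torch
from torch.overrides import TorchFunctionMode

class PassThroughMode(TorchFunctionMode):
    # A do-nothing mode that forwards every intercepted call to the original function.
    def __torch_function__(self, func, types, args=(), kwargs=None):
        return func(*args, **(kwargs or {}))

@torch.compile
def f(x):
    # Entering a TorchFunctionMode inside the compiled region is the pattern
    # the issue reports as triggering graph breaks.
    with PassThroughMode():
        return torch.sin(x) + 1

print(f(torch.randn(4)))
```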
1.4 Closed Issues
Closed Issues This Week: 27
Average Issue Close Time (This Week): 1.98 days
Average Issue Close Time (All Time): Not available
Summarized Issues:
- Type Mismatch Errors in PyTorch Functions: Issues related to type mismatch errors in PyTorch functions often arise when combining `torch.compile`, `TorchFunctionMode`, and `vmap`. These errors are typically due to improper handling of scalar tensors or specific operators like `aten::index`. Such issues can lead to assertion errors and runtime failures, impacting the usability of custom functions and compiled models.
- Memory Management Issues: Memory management issues in PyTorch can manifest as memory leaks or out-of-memory errors. These problems occur during model training or exporting models to ONNX format, leading to increased peak memory usage or exceeding available memory limits. Such issues are critical as they can halt model training or deployment processes.
- Errors with Empty Tensors: Several PyTorch functions, including `argmin()`, `argmax()`, `kthvalue()`, `topk()`, `aminmax()`, `amin()`, and `amax()`, throw errors when applied to empty tensors. These errors occur even when dimensions are specified, indicating a need for better handling of edge cases involving empty tensors. (A minimal sketch of this behavior appears after this list.)
- Build and CI Pipeline Failures: Failures in the build process and CI pipeline are often due to issues like missing license files, unsupported signals, or linking errors. These problems can block updates and testing, necessitating workarounds or fixes to ensure continuous integration and deployment.
- Functionality and Performance Issues: Issues related to functionality and performance include unsupported functions, performance drops, and incorrect values returned by specific functions. These issues can affect model training, exporting, and the accuracy of results, requiring prioritization and fixes to maintain performance and correctness.
- Documentation and Usability Issues: Documentation errors and usability issues can lead to confusion and incorrect usage of PyTorch functions. These issues include incorrect terms in documentation, unexpected behavior of functions, and lack of intuitive understanding of parameters, which need to be addressed to improve user experience.
- Support for New Data Types: Adding support for new data types, such as converting 16-bit TIFF images into PyTorch tensors, is crucial for maintaining data precision in research. The lack of native support for certain data types can hinder the adoption of PyTorch in specific research areas.
- Miscellaneous Issues: Other issues include runtime errors with specific optimizers, internal assertion failures, and user queries about memory addresses in tensors. These issues, while varied, highlight the need for robust error handling and clear documentation to assist users in troubleshooting.
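As a concrete illustration of the empty-tensor item above, a minimal sketch showing that a reduction such as `argmax` raises when the reduced dimension has zero elements, even though a dimension is specified:

```python
import torch

empty = torch.empty(3, 0)  # the dimension being reduced has zero elements

try:
    empty.argmax(dim=1)
except (RuntimeError, IndexError) as err:
    print(f"argmax over an empty dimension raised: {err}")
```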
1.5 Issue Discussion Insights
This section will analyze the tone and sentiment of discussions within this project's open issues within the past week to identify potentially heated exchanges and to maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open issues from the past week.
II. Pull Requests
2.1 Open Pull Requests
Open Pull Requests This Week: 100
Pull Requests:
- Continuous Integration (CI) Updates: This topic includes updates to the continuous integration (CI) system for the PyTorch project. One pull request aims to update the CI pin for the Halide backend. Another pull request addresses issues caused by a recent PyTorch update, including fixing compilation errors and resolving a failure in the inductor case due to the CUDA bias implementation.
- AutoHeuristic and Kernel Choice: This topic covers enhancements to the AutoHeuristic feature for kernel choice selection. One pull request adds support for global feedback in the AutoHeuristic feature and enables it for the mixed_mm module. Another pull request introduces a script to collect data for mixed_mm to facilitate heuristic learning.
- Pyre Mode Headers: This topic involves adding missing Pyre mode headers to the PyTorch project. The pull request is part of a larger batch update to ensure consistency and completeness in the project's type annotations.
- Compiler Flags and Codegen: This topic includes updates related to compiler flags and code generation. One pull request introduces `TORCHINDUCTOR_ADDITIONAL_COMPILER_FLAGS` to allow extra compiler flags. Another pull request forces the inlining of the codegen function in the C++ code for the inductor component.
- Guard and Cache Mechanisms: This topic covers improvements to guard and cache mechanisms in the PyTorch project. One pull request aims to cache the `attr_proxy` for the `nn_module` attribute. Another pull request reduces Dynamo guard overhead by replacing empty OrderedDict guards with a boolean guard.
- DTensor Module Enhancements: This topic includes several enhancements to the dtensor module. Pull requests cover moving the Bernoulli operation to the operation strategy, modifying the `slice_backward` function, and simplifying strategy generation paths for flash and efficient attention operations.
- Documentation and API Enhancements: This topic includes updates to documentation and API enhancements. One pull request enhances the `from_local` API with detailed documentation and comments. Another pull request adds documentation for the `CustomOpDef.set_kernel_enabled` function.
- Inductor Component Updates: This topic covers updates to the Inductor component. Pull requests include enabling vectorization of the `load_seed` and `randn` functions, and adding comments for the `.min_order` and `.max_order` attributes.
- Error Handling and Debugging: This topic includes improvements to error handling and debugging. One pull request addresses an issue with `inference_mode` and `torch.autograd.functional.jacobian`. Another pull request improves error messaging for outdated Triton versions.
- XPU (Intel GPU) Support: This topic covers the addition of XPU support in various functions. Pull requests add XPU support to the `cdist_impl` function and enable unit tests in the `test/xpu/cc` directory.
- CUDA Graphs and Conditional Nodes: This topic includes support for capturing `torch.cond` and `torch.while_loop` in a single CUDA graph. The pull request leverages new features in CUDA 12.4 to enable data-dependent control flow on the GPU. (A minimal sketch of `torch.cond` appears after this list.)
- Static Address and AOT Compilation: This topic covers updates related to static address marking and AOT compilation. Pull requests mark all buffers and parameters of an NNModule as static and propagate buffer and parameter indices through AOT compilation.
- Profiling and Performance: This topic includes enhancements to profiling and performance. One pull request adds NVTX annotations around training phases and buffer computations. Another pull request optimizes the guard mechanism for small tuples.
- Backward Compilation and Eager Mode: This topic addresses issues with backward compilation and eager mode. Pull requests add a feature called `eager_compile_backwards_failure` and identify ongoing issues with eager backward compilation.
- Optional Types and String Views: This topic includes updates to optional types and string views. One pull request replaces instances of `c10::optional` with `std::optional`. Another pull request makes `c10::string_view` an alias of `std::string_view`.
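As background for the CUDA Graphs and Conditional Nodes item above, here is a minimal sketch of `torch.cond` under `torch.compile`, following the documented usage pattern; the CUDA-graph capture that the pull request adds is not shown here.

```python
import torch

def true_fn(x):
    return x.sin()

def false_fn(x):
    return x.cos()

@torch.compile
def branch(x):
    # Data-dependent control flow: which branch runs depends on the tensor's values.
    return torch.cond(x.sum() > 0, true_fn, false_fn, (x,))

print(branch(torch.ones(4)))
print(branch(-torch.ones(4)))
```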
2.2 Closed Pull Requests
Closed Pull Requests This Week: 0
Summarized Pull Requests:
As of our latest update, there are no closed pull requests for the project this week.
2.3 Pull Request Discussion Insights
This section will analyze the tone and sentiment of discussions within this project's open pull requests within the past week to identify potentially heated exchanges and to maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open pull requests from the past week.
III. Commits
3.1 Commits
Commits This Week: 101
Summarized Commits:
- Bug Fixes and Stability Improvements: Several commits address various bugs and stability issues, such as preventing crashes on Apple devices with Metal API validation, fixing NaN issues in flash attention, and resolving segmentation faults with safer initialization methods. These changes ensure the robustness and reliability of the codebase.
- Testing Enhancements: Multiple commits focus on improving the testing framework, including unskipping tests on the ROCm backend, marking tests as slow on Windows, and adding new tests for type promotion and mutation validation. These efforts enhance the test coverage and reliability of the project.
- Header File Standardization: A series of commits replace the custom header `<c10/util/Optional.h>` with the standard C++ header `<optional>`, reflecting a move towards standardization and modern C++ practices.
- Type and Data Handling: Commits address issues related to type promotion, handling Python numbers in `aten._to_copy`, and adding support for new data types like int4 and BF16. These changes improve the flexibility and correctness of data handling in the project.
- Performance Optimizations: Several commits introduce performance optimizations, such as reducing parameters for no momentum fused SGD, optimizing tensor stride handling, and enhancing the performance of the BF16 `Softmax` operation. These optimizations aim to improve the efficiency of the codebase.
- API and Functionality Additions: New APIs and functionalities are introduced, including `capture_triton` for tracing triton kernels, `torch.library.Library._register_torch_dispatch_rule` for custom dispatch rules, and support for autoloading device extensions. These additions expand the capabilities and usability of the project.
- Code Refactoring and Cleanup: Commits involve refactoring and cleaning up the codebase, such as simplifying `c10::string_view`, removing unused Caffe2 code, and cleaning up preprocessor conditionals. These efforts improve code maintainability and readability.
- Documentation and Logging Improvements: Enhancements to documentation and logging include removing outdated Caffe2 documentation, adding logging for model IDs in TorchScript, and improving logging for recompiles. These changes help developers understand and debug the code more effectively.
- Compatibility and Build Updates: Updates to build configurations and compatibility include adding support for Python 3.10 with CUDA 12.1, updating the CMake configuration, and introducing workflows for Python 3.13 binaries. These updates ensure the project remains compatible with various environments and dependencies.
- Memory and Resource Management: Commits address memory and resource management issues, such as fixing memory snapshot annotations, ensuring correct tensor allocation for checkpoint optimizer states, and improving the handling of local buffers in the Inductor CPP backend. These changes optimize resource usage and prevent memory-related issues.
- Graph and Tensor Operations: Enhancements to graph and tensor operations include decompositions for copy variants of view operations, schema inference for custom operations, and support for tensor stride in execution traces. These improvements enhance the functionality and performance of graph and tensor manipulations.
- Quantization and Precision Handling: Commits focus on quantization and precision handling, such as enabling fusion of qlinear post operations, addressing quantization parameter mismatches, and supporting int4 kernels on AMD GPUs. These changes improve the accuracy and efficiency of quantized operations.
- Debugging and Profiling Tools: Enhancements to debugging and profiling tools include adding deviceMesh information to DTensor operation logs, improving the `print_readable` function, and enhancing the logging for symbolic dimensions. These tools help developers diagnose and optimize the code more effectively.
- Error Handling and Validation: Improvements to error handling and validation include fixing issues with import statements for pytest compatibility, addressing runtime errors with checkpoint optimizer states, and ensuring proper handling of symbolic dimensions. These changes enhance the robustness and correctness of the code.
- Custom Operation Support: New features for custom operations include defining interactions between `torch_dispatch` classes and custom operations, and introducing schema inference for custom operations. These features enable more flexible and powerful custom operations.
- Thread and Process Management: Updates to thread and process management include naming threads in thread pools for better debugging and modifying the elastic store barrier operation to reduce network round trips. These changes improve the efficiency and debuggability of multi-threaded and distributed operations.
- Miscellaneous Enhancements: Various other enhancements include adding support for new data types, improving the naming of subgraph inputs, and introducing a new 'scale' keyword argument to the FlexAttention module. These changes add new features and improve the overall functionality of the project.
IV. Contributors
4.1 Contributors
Active Contributors:
We consider an active contributor in this project to be any contributor who has made at least 1 commit, opened at least 1 issue, or created at least 1 pull request in the past month.
Contributor | Commits | Pull Requests | Issues |
---|---|---|---|
hyperkai | 0 | 0 | 13 |
rzou | 10 | 0 | 0 |
cyy | 9 | 0 | 0 |
zou3519 | 0 | 0 | 8 |
anijain2305 | 0 | 7 | 0 |
Chillee | 0 | 1 | 5 |
ezyang | 0 | 4 | 2 |
cyyever | 0 | 5 | 0 |
atalman | 2 | 0 | 2 |
leslie-fang-intel | 3 | 1 | 0 |
AlnisM | 0 | 4 | 0 |
wanchaol | 0 | 4 | 0 |
mlazos | 0 | 4 | 0 |
Catherine Lee | 3 | 0 | 0 |
Joel Schlosser | 3 | 0 | 0 |
chilli | 3 | 0 | 0 |
Animesh Jain | 3 | 0 | 0 |
williamwen42 | 0 | 3 | 0 |
clee2000 | 0 | 2 | 1 |
yushangdi | 0 | 2 | 1 |
shunting314 | 0 | 0 | 3 |
Li-Huai (Allan) Lin | 2 | 0 | 0 |
Xuehai Pan | 2 | 0 | 0 |
Aaron Enye Shi | 2 | 0 | 0 |
Tristan Rice | 2 | 0 | 0 |
Tianyi Tao | 2 | 0 | 0 |
Jerry Mannil | 2 | 0 | 0 |
Shangdi Yu | 2 | 0 | 0 |
eqy | 1 | 1 | 0 |
Xu Han | 2 | 0 | 0 |
Will Constable | 2 | 0 | 0 |
Anshul Sinha | 2 | 0 | 0 |
helloguo | 0 | 2 | 0 |
yf225 | 0 | 2 | 0 |
soulitzer | 0 | 2 | 0 |
wconstab | 0 | 2 | 0 |
izaitsevfb | 0 | 2 | 0 |
isuruf | 0 | 2 | 0 |
tursom | 0 | 1 | 1 |
XuehaiPan | 0 | 2 | 0 |
jataylo | 0 | 2 | 0 |
bdhirsh | 0 | 1 | 1 |
pianpwk | 0 | 2 | 0 |
fegin | 0 | 2 | 0 |
ZhiweiYan-96 | 0 | 2 | 0 |
oraluben | 0 | 1 | 1 |
wbigat | 0 | 0 | 2 |
stswidwinski | 0 | 0 | 2 |
Michael Eisel | 1 | 0 | 0 |
Andres Lugo-Reyes | 1 | 0 | 0 |
Jiang, Yanbing | 1 | 0 | 0 |
Valentine233 | 1 | 0 | 0 |
Tom Ritchford | 1 | 0 | 0 |
awayzjj | 1 | 0 | 0 |
Zhengxu Chen | 1 | 0 | 0 |
Riley Dulin | 1 | 0 | 0 |
Chen Lai | 1 | 0 | 0 |
Peter Bell | 1 | 0 | 0 |
Yifu Wang | 1 | 0 | 0 |
Yidi Wu | 1 | 0 | 0 |
Eddie Yan | 1 | 0 | 0 |
Richard Zou | 1 | 0 | 0 |
milesial | 1 | 0 | 0 |
Yichen Yan | 1 | 0 | 0 |
Valentin Andrei | 1 | 0 | 0 |
Yuanhao Ji | 1 | 0 | 0 |
Sam Larsen | 1 | 0 | 0 |
Sheng Fu | 1 | 0 | 0 |
Edward Z. Yang | 1 | 0 | 0 |
Jane Xu | 1 | 0 | 0 |
Jerry Zhang | 1 | 0 | 0 |
Huy Do | 1 | 0 | 0 |
Yueming Hao | 1 | 0 | 0 |
Andrey Talman | 1 | 0 | 0 |
Yang Chen | 1 | 0 | 0 |
Xia, Weiwen | 1 | 0 | 0 |
Jeeja | 1 | 0 | 0 |
James Wu | 1 | 0 | 0 |
Michael Lazos | 1 | 0 | 0 |
Jithun Nair | 1 | 0 | 0 |
William Wen | 1 | 0 | 0 |
Jason Ansel | 1 | 0 | 0 |
Abhinav Podili | 1 | 0 | 0 |
Edan Tessel Sneh | 1 | 0 | 0 |
Feny Patel | 1 | 0 | 0 |
Alnis Murtovi | 1 | 0 | 0 |
Simon Fan | 1 | 0 | 0 |
Sijia Chen | 1 | 0 | 0 |
Pian Pawakapan | 1 | 0 | 0 |
Simon Mahns | 1 | 0 | 0 |
peaceorwell | 1 | 0 | 0 |
Shuo Ding | 1 | 0 | 0 |
jansel | 0 | 1 | 0 |
connernilsen | 0 | 1 | 0 |
ydwu4 | 0 | 1 | 0 |
sijiac | 0 | 1 | 0 |
mori360 | 0 | 1 | 0 |
tianyeeT | 0 | 1 | 0 |
guangyey | 0 | 1 | 0 |
Stonepia | 0 | 1 | 0 |
TiRune | 0 | 1 | 0 |
datagero | 0 | 1 | 0 |
fengyuan14 | 0 | 1 | 0 |
dnikolaev-amd | 0 | 1 | 0 |
fenypatel99 | 0 | 1 | 0 |
cccclai | 0 | 1 | 0 |
mwlon | 0 | 1 | 0 |
bertmaher | 0 | 1 | 0 |
drisspg | 0 | 1 | 0 |
yifuwang | 0 | 1 | 0 |
wz337 | 0 | 1 | 0 |
d4l3k | 0 | 1 | 0 |
zhxchen17 | 0 | 1 | 0 |
eellison | 0 | 1 | 0 |
Aidyn-A | 0 | 1 | 0 |
sraikund16 | 0 | 1 | 0 |
MatzeB | 0 | 1 | 0 |
galv | 0 | 1 | 0 |
nmacchioni | 0 | 1 | 0 |
masnesral | 0 | 1 | 0 |
sinhaanshul | 0 | 1 | 0 |
yaochengji | 0 | 1 | 0 |
yanboliang | 0 | 1 | 0 |
yangsiyu007 | 0 | 1 | 0 |
cdzhan | 0 | 1 | 0 |
jananisriram | 0 | 1 | 0 |
xu-song | 0 | 1 | 0 |
AlexDenisov | 0 | 1 | 0 |
r-barnes | 0 | 1 | 0 |
Hjp-momojiji | 0 | 0 | 1 |
lflis | 0 | 0 | 1 |
Antonio-Moura-Coutinho | 0 | 0 | 1 |
wht0948 | 0 | 0 | 1 |
vadimkantorov | 0 | 0 | 1 |
xle97 | 0 | 0 | 1 |
pietrolesci | 0 | 0 | 1 |
yiliu30 | 0 | 0 | 1 |
lw | 0 | 0 | 1 |
alexdremov | 0 | 0 | 1 |
bryankaplan | 0 | 0 | 1 |
younghuvee | 0 | 0 | 1 |
FabianSchuetze | 0 | 0 | 1 |
ani300 | 0 | 0 | 1 |
jbschlosser | 0 | 0 | 1 |
dannikay | 0 | 0 | 1 |
xmfan | 0 | 0 | 1 |
youkaichao | 0 | 0 | 1 |
tingyangk | 0 | 0 | 1 |
BioGeek | 0 | 0 | 1 |
quinnwillett | 0 | 0 | 1 |
GdoongMathew | 0 | 0 | 1 |
clessig | 0 | 0 | 1 |
enrico-stauss | 0 | 0 | 1 |
david-sitsky | 0 | 0 | 1 |
battaglia01 | 0 | 0 | 1 |
MaltoseFlower | 0 | 0 | 1 |
NicolasHug | 0 | 0 | 1 |
rgommers | 0 | 0 | 1 |
Vremold | 0 | 0 | 1 |
lucasjinreal | 0 | 0 | 1 |
valosekj | 0 | 0 | 1 |
davidberard98 | 0 | 0 | 1 |
curtisvwalker | 0 | 0 | 1 |
tinglvv | 0 | 0 | 1 |
krzysztofjordan | 0 | 0 | 1 |
tylerjereddy | 0 | 0 | 1 |
oulgen | 0 | 0 | 1 |
Qinlong275 | 0 | 0 | 1 |
RobuRishabh | 0 | 0 | 1 |
zxd1997066 | 0 | 0 | 1 |
wangjiangben-hw | 0 | 0 | 1 |
AdrienCourtois | 0 | 0 | 1 |
szmigacz | 0 | 0 | 1 |
joacorapela | 0 | 0 | 1 |
jeffdaily | 0 | 0 | 1 |
yaxan | 0 | 0 | 1 |
guberti | 0 | 0 | 1 |
sidijju | 0 | 0 | 1 |
michaeleisel | 0 | 0 | 1 |
angelayi | 0 | 0 | 1 |
matthost | 0 | 0 | 1 |
blaine-rister | 0 | 0 | 1 |
shuqiangzhang | 0 | 0 | 1 |
deo-abhijit | 0 | 0 | 1 |
etaf | 0 | 0 | 1 |
Coderx7 | 0 | 0 | 1 |
thanga-v2 | 0 | 0 | 1 |