Weekly GitHub Report for PyTorch - 2024-07-22 21:38:48
Thank you for subscribing to our weekly newsletter! Each week, we deliver a comprehensive summary of your GitHub project's latest activity right to your inbox, including an overview of your project's issues, pull requests, contributors, and commit activity.
I. Issues
1.1 Open Issues
Open Issues This Week: 104
Summarized Issues:
1.2 Top 5 Active Issues:
We consider active issues to be those that have generated the most discussion in their comment threads.
-
torch._dynamo.exc.Unsupported: call_function args: UserDefinedObjectVariable(EasyDict): This issue involves a user attempting to run a model for inference on an Android device using executorch, but encountering the error torch._dynamo.exc.Unsupported: call_function args: UserDefinedObjectVariable(EasyDict). The user has provided a traceback log and mentioned that the model is based on an existing one without modifications to the modules involved in the error.
- The comments discuss the root cause being that EasyDict is not traceable by Dynamo, attempts to use alternatives like AttrDict, and various suggestions to resolve the issue, including using strict=False and registering EasyDict as a pytree node. Despite these efforts, the user continues to face issues, including new errors related to unsupported input types and difficulties in reproducing the problem. The user has also created a public repository to help with debugging.
- Number of comments: 72
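Registering a container type as a pytree node, as suggested in the comments above, amounts to supplying flatten and unflatten functions for it. The sketch below illustrates only that round-trip contract in plain Python; the `EasyDict` stand-in and both function names are illustrative, and a real fix would pass such functions to PyTorch's pytree registration API (e.g. `torch.utils._pytree.register_pytree_node`) rather than use them standalone.

```python
# Sketch of the flatten/unflatten contract a pytree registration needs.
# This standalone version only illustrates the round-trip property; it does
# not touch torch, and the names here are illustrative.

class EasyDict(dict):
    """Stand-in for the third-party EasyDict: a dict with attribute access."""
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError as e:
            raise AttributeError(name) from e

def flatten_easydict(d):
    # Split the container into leaf values and static "context" (the keys).
    keys = sorted(d.keys())
    return [d[k] for k in keys], keys

def unflatten_easydict(values, keys):
    # Rebuild the container from leaves plus context.
    return EasyDict(zip(keys, values))

cfg = EasyDict(lr=0.1, depth=3)
leaves, ctx = flatten_easydict(cfg)
rebuilt = unflatten_easydict(leaves, ctx)
assert rebuilt == cfg and rebuilt.lr == 0.1
```

The key design point is that flatten must separate trace-relevant leaves from static structure, so a tracer can rebuild an equivalent container from transformed leaves.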
-
[RFC] Per-Parameter-Sharding FSDP: This issue proposes a new design for Fully Sharded Data Parallel (FSDP) in PyTorch, called Per-Parameter-Sharding FSDP, which aims to address limitations in the existing FSDP by sharding each parameter on dimension 0. The new design promises benefits such as flexible mixed-precision all-gather, efficient handling of frozen parameters, communication-free sharded state dicts, and future communication optimizations.
- The comments discuss various aspects of the proposed design, including clarifications on mixed-precision support, initialization and device handling, potential issues with shared parameters, and the integration with other parallelism strategies like tensor and pipeline parallelism. There are also discussions on the challenges of using the new design with existing models, the need for custom kernels to improve performance, and the handling of non-persistent buffers during initialization.
- Number of comments: 54
-
[v.2.4.0] Release Tracker: This issue is a release tracker for the PyTorch 2.4.0 release, detailing the phases and criteria for cherry-picking changes to the release branch. It includes specific instructions for contributors on how to request cherry-picks and the process for approval by the release managers.
- The comments section primarily consists of contributors providing links to their pull requests, specifying the criteria category for their changes, and receiving approval or feedback from the release managers. Most comments indicate successful merges, with occasional requests for additional information or fixes before approval.
- Number of comments: 54
-
ROCm & Windows Support: This issue requests the addition of PyTorch support for AMD GPUs on Windows, leveraging the recently released ROCm Windows support by AMD. The request highlights the need for compatibility to enhance the PyTorch ecosystem on Windows platforms.
- The comments discuss the current lack of PyTorch support on Windows for AMD GPUs, with users expressing frustration and sharing their experiences with ROCm on Linux. Some users mention attempts to use ROCm with various Linux distributions, while others discuss potential workarounds and the need for AMD to improve their support and documentation. There is also a mention of recent updates and driver releases that might help, but overall, the sentiment is one of waiting for official support.
- Number of comments: 52
-
Custom attention recompilations: This issue describes a bug where the compiler cache is exhausted when running a specific function in PyTorch, leading to recompilation warnings and errors. The user is experiencing frequent recompilations due to changes in certain objects and is seeking advice on how to mitigate these recompilations, especially during inference.
- The comments discuss the nature of the recompilations, potential workarounds like increasing the cache size, and the use of specific environment variables to log recompilation reasons. There are also suggestions to modify the code to avoid frequent changes in certain objects and to apply torch.compile after an initial run that initializes attributes. The conversation includes attempts to reproduce the issue, sharing of logs, and discussions of potential fixes and improvements in the PyTorch framework.
- Number of comments: 51
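The cache-exhaustion behavior described above can be pictured as a per-function compile cache with a fixed entry limit: every guard-failing input triggers a recompile and adds an entry, and once the limit is reached execution falls back to eager. This is a dependency-free toy model of that mechanism, not Dynamo's implementation (the real knob is, to our understanding, torch._dynamo.config.cache_size_limit).

```python
# Toy model of a per-function compile cache with a size limit, illustrating
# why frequently-changing object attributes trigger repeated recompilation.

class CompileCache:
    def __init__(self, cache_size_limit=8):
        self.cache_size_limit = cache_size_limit
        self.entries = {}          # guard key -> "compiled" artifact
        self.recompiles = 0
        self.fell_back = False

    def run(self, guard_key):
        if guard_key in self.entries:
            return self.entries[guard_key]       # cache hit: no recompile
        if len(self.entries) >= self.cache_size_limit:
            self.fell_back = True                # cache exhausted: eager fallback
            return f"eager({guard_key})"
        self.recompiles += 1                     # guard failure: recompile
        self.entries[guard_key] = f"compiled({guard_key})"
        return self.entries[guard_key]

cache = CompileCache(cache_size_limit=2)
for shape in [(1, 8), (2, 8), (1, 8), (3, 8)]:  # third call is a cache hit
    cache.run(shape)
assert cache.recompiles == 2 and cache.fell_back
```

The sketch makes the mitigation strategies from the discussion concrete: either stop the guard key from changing (stabilize object attributes) or raise the limit so more variants can coexist.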
1.3 Top 5 Quiet Issues:
We consider quiet issues to be those that have been open in this project for the longest time. The team should work together to get these issues resolved and closed as soon as possible.
-
JIT input aliasing does not support aten::fill_: This issue highlights a bug encountered when using TensorBoard to output the model structure of the deformable-detr code, resulting in multiple TracerWarning and UserWarning messages related to tensor operations. Additionally, a critical error occurs because PyTorch's JIT alias analysis does not support the aten::fill_ operation, causing the process to fail and preventing the graph from being saved.
- Open for 364 days, 20 hours, 44 minutes
-
MPS memory issue, MPS backend out of memory, but works if I empty the MPS cache: This issue pertains to a memory management problem with the MPS (Metal Performance Shaders) backend in PyTorch, where the MPS cache does not release memory as expected, leading to out-of-memory errors during model execution. The problem significantly affects performance and can cause application termination unless the MPS cache is manually emptied, as demonstrated by the provided example script.
- Open for 364 days, 11 hours, 20 minutes
-
[FSDP] FSDP doesn't work (random accuracy performance) when using param_init_fn and sync_module_states=True: This issue describes a problem with Fully Sharded Data Parallel (FSDP) in PyTorch, where using param_init_fn together with sync_module_states=True results in random accuracy and F1 performance, indicating that the model is not learning properly. The problem arises when attempting to load a large model only on rank 0 to save CPU RAM, but the expected behavior of proper weight initialization and parameter broadcasting across ranks is not achieved, leading to ineffective training.
- Open for 364 days, 09 hours, 35 minutes
-
Strange backward behavior with sparse tensors: This issue describes a bug where the backward pass fails when using sparse gradients in PyTorch, resulting in a RuntimeError indicating that the reshape operation is not implemented for sparse tensors. The provided code snippet demonstrates how to reproduce the error, highlighting the problem with the backward function in a custom autograd function.
- Open for 364 days, 08 hours, 46 minutes
-
Report model flop utilization (mfu) in benchmark: This issue involves implementing a method to report model flop utilization (mfu) in a benchmark by running a flop counter on the eager model or using fake tensors in dynamo analysis to compute the theoretical maximum throughput. This approach aims to provide an absolute yardstick for measuring performance, rather than relying on speedup comparisons against the eager model, which can be manipulated by intentionally slowing down the eager model.
- Open for 364 days, 06 hours, 16 minutes
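Model flop utilization as described in the issue above reduces to a simple ratio: the FLOP/s a step actually achieves divided by the hardware's theoretical peak FLOP/s. A minimal sketch of that arithmetic follows; all of the numbers in it are made up for illustration.

```python
def model_flop_utilization(flops_per_step, step_time_s, peak_flops_per_s):
    """MFU = achieved FLOP/s divided by theoretical peak FLOP/s."""
    achieved = flops_per_step / step_time_s
    return achieved / peak_flops_per_s

# Hypothetical numbers: a step doing 6.2e12 FLOPs in 25 ms on a 312 TFLOP/s device.
mfu = model_flop_utilization(6.2e12, 0.025, 312e12)
print(f"MFU: {mfu:.1%}")
```

Unlike speedup over eager, this yardstick cannot be inflated by slowing the baseline down: the denominator is a fixed hardware property.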
1.4 Closed Issues
Closed Issues This Week: 81
Average Issue Close Time (This Week): 32.93 days
Summarized Issues:
1.5 Issue Discussion Insights
This section analyzes the tone and sentiment of discussions in this project's open issues from the past week to identify potentially heated exchanges and help maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open issues from the past week.
II. Pull Requests
2.1 Open Pull Requests
Open Pull Requests This Week: 179
Pull Requests:
- DTensor Enhancements: This topic covers multiple pull requests that introduce various features and optimizations to the DTensor module in PyTorch. These include static padding for local tensors, sharding propagation through operation decomposition, and activation checkpointing differentiation. These changes aim to improve tensor operations and memory management within the DTensor module.
- Clang-Tidy Warnings: Several pull requests address clang-tidy warnings in different parts of the PyTorch codebase. These include resolving warnings in the aten/src/ATen/native directory and enabling clang-tidy in the torch/csrc/distributed directory. These efforts are part of ongoing work to improve code quality and maintainability.
- Inductor and GEMM Operations: Multiple pull requests focus on enhancing the Inductor component and GEMM operations in PyTorch. These include refactoring communication analysis, adding support for k slicing in static shapes, and improving kernel-level benchmarking. These changes aim to optimize performance and extend functionality.
- Kineto and Profiling: This topic includes pull requests that enhance profiling capabilities in PyTorch. These include introducing a kineto-based XPU profiler and populating source and destination ranks for point-to-point kernel operations. These changes aim to provide better profiling support and insights for performance analysis.
- Refactoring and Code Quality: Several pull requests focus on refactoring and improving code quality in various parts of the PyTorch project. These include refactoring the fusion of inplace operations, enhancing the serialization IO infrastructure, and addressing post-land review fixes. These efforts aim to improve code maintainability and functionality.
- Decompositions and Operations: Multiple pull requests introduce decompositions for various functions in PyTorch. These include decompositions for t_copy, expand_copy, squeeze_copy, unsqueeze_copy, transpose_copy, and permute_copy. These changes aim to improve the modularity and efficiency of tensor operations.
- Masked Operations: This topic includes pull requests that enhance support for masked operations in PyTorch. These include implementing the masked_select operation for NestedTensors, updating the _masked_softmax function, and adding utility functions like and_masks and or_masks. These changes aim to improve the handling of masked operations.
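Utilities named like and_masks/or_masks typically follow a combinator pattern: they take mask functions and return a new mask function that combines their results pointwise. The sketch below shows only that pattern over plain predicates; the real PyTorch utilities operate on tensor masks, and the example predicates here are illustrative.

```python
def and_masks(*mask_fns):
    """Combine mask predicates: keep a position only if every mask keeps it."""
    return lambda *args: all(fn(*args) for fn in mask_fns)

def or_masks(*mask_fns):
    """Combine mask predicates: keep a position if any mask keeps it."""
    return lambda *args: any(fn(*args) for fn in mask_fns)

# Example predicates over (query_idx, key_idx) positions, as in attention masking.
causal = lambda q, k: k <= q          # no attending to future positions
local = lambda q, k: q - k < 2        # only a short backward window

causal_and_local = and_masks(causal, local)
assert causal_and_local(5, 4)         # within window and not in the future
assert not causal_and_local(5, 1)     # causal but outside the local window
assert not causal_and_local(3, 4)     # in the future
```

The benefit of the combinator style is that each mask stays simple and testable, while arbitrary conjunctions and disjunctions compose without new mask code.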
- Memory Management: Several pull requests address memory management issues in PyTorch. These include fixing memory leaks related to pinned memory, improving memory allocation in the flash-attention mechanism, and introducing a MemPool class for better memory pool management. These changes aim to optimize memory usage and prevent allocation conflicts.
- Vectorization and Performance: Multiple pull requests focus on enhancing vectorization and performance in PyTorch. These include support for vectorized operations in torch.argmax and torch.argmin, vectorized reduction operations for torch.any, and improvements to thread blocking heuristics in GEMM implementations. These changes aim to optimize computation and improve performance.
- Error Handling and Logging: This topic includes pull requests that enhance error handling and logging in PyTorch. These include enabling exception chaining for the BackendCompilerFailed exception, enhancing error logging in the export functionality, and adding support for multiline traces in the munge_exc function. These changes aim to improve debugging and error reporting.
2.2 Closed Pull Requests
Closed Pull Requests This Week: 358
Summarized Pull Requests:
2.3 Pull Request Discussion Insights
This section analyzes the tone and sentiment of discussions in this project's open pull requests from the past week to identify potentially heated exchanges and help maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open pull requests from the past week.
III. Commits
3.1 Commits
Commits This Week: 268
Summarized Commits:
- Hotfixes and Bug Fixes: Several commits address specific bugs and issues, such as correcting the GetSize calculation for strided batched GEMM, fixing the 2D state dict error for HSDP, and resolving permission errors in torch.cuda.memory.list_gpu_processes(). These changes ensure the stability and correctness of the codebase.
- Autograd and Tensor Operations: Updates to the cond function to support autograd, the addition of support for bfloat16 in NaN checks, and the introduction of symbolic integers (SymInts) in the FakeTensor cache highlight ongoing improvements in tensor operations and autograd support.
- Environment and Configuration Management: Replacing manual parsing of environment variables with std::filesystem::temp_directory_path(), enabling dynamic migration of jobs to the LF AWS account, and updating the CPython support policy reflect efforts to streamline environment and configuration management.
- Testing and CI Improvements: Reintroducing test_multiple_output_alias to the skip list, refactoring diff time tests to use the PT2 Benchmark Runner, and enabling ROCm support for Inductor's dynamic_rblock_scaling demonstrate a focus on improving testing and continuous integration processes.
- Performance Optimizations: Optimizing the GEMM template for arbitrary values of n, improving cache blocking in the CPP GEMM template, and performance tuning for B2B-GEMM operations indicate ongoing efforts to enhance performance.
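Cache blocking in a GEMM restructures the three matrix-multiply loops to work on small tiles, so each tile's working set stays resident in cache while it is reused. This pure-Python sketch shows only the loop structure; real implementations (such as the CPP GEMM template mentioned above) operate on packed buffers with hardware-tuned block sizes.

```python
def blocked_matmul(A, B, block=2):
    """C = A @ B computed tile by tile: the loop structure cache blocking uses."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i0 in range(0, n, block):
        for j0 in range(0, m, block):
            for p0 in range(0, k, block):
                # Update one C tile using one A tile and one B tile; each tile
                # is reused `block` times while it is hot in cache.
                for i in range(i0, min(i0 + block, n)):
                    for j in range(j0, min(j0 + block, m)):
                        acc = C[i][j]
                        for p in range(p0, min(p0 + block, k)):
                            acc += A[i][p] * B[p][j]
                        C[i][j] = acc
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert blocked_matmul(A, B) == [[19.0, 22.0], [43.0, 50.0]]
```

Tuning amounts to choosing block sizes so that one tile each of A, B, and C fits in the target cache level, which is exactly what cache-blocking heuristics estimate.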
- Documentation and Code Quality: Updates to the tensor.repeat function documentation, enforcing style guidelines for empty lines within import segments, and adding detailed descriptions to functions like create_block_mask show a commitment to maintaining high code quality and comprehensive documentation.
- New Features and Enhancements: Introducing support for Inter-Process Communication (IPC) for Expandable Segments, adding support for FORMAT_SIMPLE and FORMAT_SPEC in Dynamo, and enabling custom work registration from Python highlight the addition of new features and enhancements.
- Error Handling and Logging: Enhancing error messages for CUDA UpSample, improving error logging in the TCPStoreLibUvBackend, and introducing debug logs for inlining constants reflect efforts to improve error handling and logging.
- Refactoring and Code Cleanup: Refactoring the aoti_torch__scaled_mm function, removing unnecessary tests related to the comm tensor, and simplifying the cub::unique_by_key code indicate ongoing refactoring and code cleanup efforts.
- Distributed and Parallel Computing: Adding support for GradientEdge as an output in torch.autograd.grad, introducing a unit test for Distributed Data Parallel (DDP) combined with Activation Checkpointing (AC), and addressing race conditions in subgroup store prefixes highlight improvements in distributed and parallel computing.
- Backend and Device Support: Introducing hooks for Intel Gaudi devices, enabling the C++ pytree functionality within torch::deploy, and adding support for the TO_BOOL operation in Dynamo demonstrate ongoing backend and device support enhancements.
- Memory Management and Optimization: Addressing memory management issues by implementing conservative estimation of plannable inputs, optimizing bias-add computation in GEMM operations, and introducing a post_grad pass to re-inplace operations reflect efforts to optimize memory usage.
- API and Interface Updates: Introducing new top-level APIs for the numeric debugger, adding support for string parameters in aten operations, and updating the fully_shard function to accept a list of modules indicate ongoing API and interface updates.
- Security and Permissions: Addressing permission issues in the xpu nightly wheel test, ensuring proper handling of subprocess communication to prevent corruption, and fixing permission errors in torch.cuda.memory.list_gpu_processes() highlight security and permissions improvements.
- Compatibility and Portability: Adapting file paths for Windows, ensuring compatibility with different versions of sympy, and addressing issues with the hip-clang compiler reflect efforts to improve compatibility and portability.
- Experimental and Prototype Features: Making the flex_attention prototype API public, introducing a flexible model for the WaitCounter backend, and adding support for RaiseException and other operations in non-strict export highlight experimental and prototype feature development.
- Graph and IR Enhancements: Enhancing the training Intermediate Representation (IR), introducing a non-strict implementation of training IR export, and optimizing the torch.nested.as_nested_tensor(t) constructor reflect ongoing improvements in graph and IR handling.
- Utility and Helper Functions: Introducing utility functions like and_masks and or_masks, adding a print_readable function to the unflattened module, and updating the safe_mul function for compatibility with different sympy versions highlight the addition of utility and helper functions.
- Testing and Benchmarking: Introducing a benchmark for flex decoding, adding deterministic state dictionary unit tests for the SparseAdam optimizer, and reducing the number of samples for svd_lowrank and pca_lowrank operations reflect efforts to improve testing and benchmarking.
- Code Generation and Compilation: Propagating buffer and parameter indices through AOT compilation, introducing buffer static input tests to cudagraph trees, and updating the mark_static_address function for inlining NN modules highlight improvements in code generation and compilation.
- Error and Exception Handling: Introducing guards in Dynamo to ensure tensor subclass metadata equality, adding exception handling for nested_tensor_from_jagged, and addressing issues with the getAugOp function reflect ongoing improvements in error and exception handling.
- Optimization and Heuristics: Introducing configuration options for the autoheuristic feature, optimizing the reinplacing pass to ignore unnecessary clones, and addressing issues with the constant_pad_nd operation reflect efforts to enhance optimization and heuristics.
- Miscellaneous Updates: Adding new core maintainers, updating the libfmt submodule, and addressing clang-tidy warnings in specific directories reflect various miscellaneous updates and improvements.
IV. Contributors
4.1 Contributors
Active Contributors:
We consider an active contributor in this project to be any contributor who has made at least 1 commit, opened at least 1 issue, or created at least 1 pull request in the past month.
Contributor | Commits | Pull Requests | Issues |
---|---|---|---|
PyTorch MergeBot | 1007 | 0 | 0 |
XuehaiPan | 0 | 74 | 0 |
hyperkai | 0 | 0 | 60 |
cyyever | 0 | 55 | 0 |
anijain2305 | 0 | 41 | 12 |
ezyang | 0 | 31 | 10 |
Chillee | 0 | 27 | 11 |
zou3519 | 0 | 19 | 19 |
xuhancn | 0 | 34 | 0 |
malfet | 0 | 27 | 4 |
yf225 | 0 | 24 | 3 |
clee2000 | 0 | 22 | 5 |
yushangdi | 0 | 19 | 5 |
peterbell10 | 0 | 20 | 2 |
williamwen42 | 0 | 18 | 4 |
wz337 | 0 | 18 | 2 |
eqy | 0 | 19 | 1 |
mori360 | 0 | 20 | 0 |
etaf | 0 | 10 | 9 |
bdhirsh | 0 | 7 | 11 |
masnesral | 0 | 18 | 0 |
mlazos | 0 | 17 | 1 |
atalman | 0 | 13 | 4 |
vmoens | 0 | 1 | 16 |
huydhn | 0 | 10 | 6 |
drisspg | 0 | 15 | 1 |
fegin | 0 | 16 | 0 |
wconstab | 0 | 14 | 1 |
rec | 0 | 13 | 2 |
eellison | 0 | 13 | 2 |
desertfire | 0 | 6 | 8 |
yifuwang | 0 | 14 | 0 |
yanboliang | 0 | 14 | 0 |
guangyey | 0 | 13 | 0 |
aorenste | 0 | 11 | 1 |
qqaatw | 0 | 12 | 0 |
mikaylagawarecki | 0 | 11 | 1 |
shunting314 | 0 | 6 | 6 |
pianpwk | 0 | 12 | 0 |
awgu | 0 | 9 | 2 |
ZainRizvi | 0 | 10 | 1 |
jbschlosser | 0 | 10 | 1 |
leslie-fang-intel | 0 | 11 | 0 |
wanchaol | 0 | 10 | 1 |
jataylo | 0 | 10 | 0 |
zhxchen17 | 0 | 10 | 0 |
oulgen | 0 | 9 | 1 |
ydwu4 | 0 | 10 | 0 |
sinhaanshul | 0 | 10 | 0 |
AlnisM | 0 | 10 | 0 |
jiashenC | 0 | 8 | 1 |
ZhiweiYan-96 | 0 | 8 | 0 |
jeffdaily | 0 | 5 | 3 |
Skylion007 | 0 | 8 | 0 |
albanD | 0 | 5 | 3 |
ColinPeppler | 0 | 7 | 0 |
joydddd | 0 | 7 | 0 |
aaronenyeshi | 0 | 7 | 0 |
d4l3k | 0 | 7 | 0 |
zxd1997066 | 0 | 1 | 6 |
isuruf | 0 | 6 | 1 |
soulitzer | 0 | 4 | 3 |
BoyuanFeng | 0 | 6 | 0 |
H-Huang | 0 | 4 | 2 |
sijiac | 0 | 4 | 2 |
c-p-i-o | 0 | 6 | 0 |
jansel | 0 | 5 | 1 |
angelayi | 0 | 4 | 2 |
jgong5 | 0 | 6 | 0 |
shuqiangzhang | 0 | 5 | 1 |
Aidyn-A | 0 | 4 | 2 |
nmacchioni | 0 | 5 | 0 |
ani300 | 0 | 3 | 2 |
jamesjwu | 0 | 5 | 0 |
khushi-411 | 0 | 5 | 0 |
jeanschmidt | 0 | 2 | 3 |
sraikund16 | 0 | 5 | 0 |
xuzhao9 | 0 | 5 | 0 |
chuanqi129 | 0 | 4 | 1 |
oraluben | 0 | 4 | 1 |
ppwwyyxx | 0 | 3 | 2 |
peaceorwell | 0 | 4 | 1 |
fduwjj | 0 | 5 | 0 |
guilhermeleobas | 0 | 2 | 3 |
justinchuby | 0 | 3 | 2 |
zdevito | 0 | 5 | 0 |
jianc99 | 0 | 0 | 5 |
kit1980 | 0 | 0 | 5 |
clessig | 0 | 0 | 5 |
chunyuan-w | 0 | 4 | 0 |
cdzhan | 0 | 3 | 1 |
yanbing-j | 0 | 4 | 0 |
r-barnes | 0 | 4 | 0 |
jithunnair-amd | 0 | 2 | 2 |
xmfan | 0 | 3 | 1 |
jovianjaison | 0 | 4 | 0 |
zxiiro | 0 | 4 | 0 |
randolf-scholz | 0 | 3 | 1 |
furtnerthomas | 0 | 4 | 0 |
jerryzh168 | 0 | 3 | 1 |
awayzjj | 0 | 4 | 0 |
XilunWu | 0 | 4 | 0 |
tianyeeT | 0 | 4 | 0 |
syed-ahmed | 0 | 3 | 1 |
awaelchli | 0 | 2 | 2 |
davidberard98 | 0 | 2 | 2 |
xw285cornell | 0 | 3 | 1 |
Danielmic | 0 | 1 | 3 |
henrylhtsang | 0 | 4 | 0 |
tianyu-l | 0 | 4 | 0 |
YangQun1 | 0 | 2 | 2 |
sanketpurandare | 0 | 3 | 0 |
titaiwangms | 0 | 3 | 0 |
dsjohns2 | 0 | 3 | 0 |
jhavukainen | 0 | 3 | 0 |
YuqingJ | 0 | 3 | 0 |
tugsbayasgalan | 0 | 1 | 2 |
jiayisunx | 0 | 3 | 0 |
haocizhang | 0 | 3 | 0 |
Valentine233 | 0 | 3 | 0 |
FindHao | 0 | 3 | 0 |
janeyx99 | 0 | 3 | 0 |
xingyunjohn1 | 0 | 3 | 0 |
helloguo | 0 | 3 | 0 |
zhangfeiv0 | 0 | 2 | 1 |
mayank31398 | 0 | 1 | 2 |
andriigrynenko | 0 | 3 | 0 |
atuljangra | 0 | 3 | 0 |
fwenguang | 0 | 2 | 1 |
IvanKobzarev | 0 | 2 | 1 |
redwrasse | 0 | 2 | 1 |
DiweiSun | 0 | 3 | 0 |
zhuhaozhe | 0 | 3 | 0 |
nicholasw-gc | 0 | 3 | 0 |
ringohoffman | 0 | 2 | 1 |
nikonikolov | 0 | 1 | 2 |
songh11 | 0 | 0 | 3 |
rootjalex | 0 | 0 | 3 |
albertz | 0 | 0 | 3 |
dvrogozh | 0 | 0 | 3 |
CaoE | 0 | 2 | 0 |
TiRune | 0 | 2 | 0 |
dvorst | 0 | 1 | 1 |
shengfukevin | 0 | 2 | 0 |
Shan19900305 | 0 | 2 | 0 |
afrittoli | 0 | 2 | 0 |
kurtamohler | 0 | 1 | 1 |
y-sq | 0 | 1 | 1 |
yaochengji | 0 | 2 | 0 |
crcrpar | 0 | 2 | 0 |
zejun-chen | 0 | 2 | 0 |
siahuat0727 | 0 | 2 | 0 |
egienvalue | 0 | 2 | 0 |
valentinandrei | 0 | 2 | 0 |
manuelcandales | 0 | 2 | 0 |
EikanWang | 0 | 2 | 0 |
jerrymannil | 0 | 2 | 0 |
tbohutyn | 0 | 1 | 1 |
zhouzaida | 0 | 1 | 1 |
PaliC | 0 | 2 | 0 |
hydeparksnow | 0 | 1 | 1 |
majing921201 | 0 | 1 | 1 |
jainapurva | 0 | 2 | 0 |
nvcastet | 0 | 2 | 0 |
aartbik | 0 | 2 | 0 |
mengph | 0 | 1 | 1 |
AlexDenisov | 0 | 2 | 0 |
tenpercent | 0 | 2 | 0 |
sdingcn | 0 | 2 | 0 |
fenypatel99 | 0 | 2 | 0 |
yan-yhy | 0 | 2 | 0 |
jerrychenhf | 0 | 1 | 1 |
sradc | 0 | 1 | 1 |
chenyang78 | 0 | 2 | 0 |
izaitsevfb | 0 | 2 | 0 |
datagero | 0 | 2 | 0 |
tursom | 0 | 1 | 1 |
fengyuan14 | 0 | 2 | 0 |
cccclai | 0 | 2 | 0 |
michaeleisel | 0 | 1 | 1 |
jananisriram | 0 | 2 | 0 |
jamesperng | 0 | 2 | 0 |
MaggieMoss | 0 | 2 | 0 |
q10 | 0 | 2 | 0 |
dshi7 | 0 | 2 | 0 |
WeiChunyu-star | 0 | 2 | 0 |
robert-hardwick | 0 | 2 | 0 |
jianyuh | 0 | 2 | 0 |
maxyanghu | 0 | 1 | 1 |
Microve | 0 | 1 | 1 |
koparasy | 0 | 1 | 1 |
JackCaoG | 0 | 2 | 0 |
brim1754 | 0 | 1 | 1 |
krzysztofjordan | 0 | 1 | 1 |
sayakpaul | 0 | 0 | 2 |
xwang233 | 0 | 0 | 2 |
linzs148 | 0 | 0 | 2 |
dilililiwhy | 0 | 0 | 2 |
SuperKogito | 0 | 0 | 2 |
yuanyao-nv | 0 | 0 | 2 |
staugust | 0 | 0 | 2 |
kabyanil | 0 | 0 | 2 |
Thinksky5124 | 0 | 0 | 2 |
Gwihwan-Go | 0 | 0 | 2 |
rybakov | 0 | 0 | 2 |
youkaichao | 0 | 0 | 2 |
tsengalb99 | 0 | 0 | 2 |
wbigat | 0 | 0 | 2 |
tingyangk | 0 | 0 | 2 |
stswidwinski | 0 | 0 | 2 |
jakelevi1996 | 0 | 0 | 2 |
WeizhuoZhang-intel | 0 | 0 | 2 |
gau-nernst | 0 | 0 | 2 |
xinyu-intel | 0 | 0 | 2 |
gilfree | 0 | 0 | 2 |
benbellick | 0 | 0 | 2 |
KnightGOKU | 0 | 0 | 2 |
GitHub | 1 | 0 | 0 |
kiszk | 0 | 1 | 0 |
houseroad | 0 | 1 | 0 |
mantaionut | 0 | 1 | 0 |
v0lta | 0 | 1 | 0 |
antiagainst | 0 | 1 | 0 |
haodongucsb | 0 | 1 | 0 |
saumishr | 0 | 1 | 0 |
LucasLLC | 0 | 1 | 0 |
iskunk | 0 | 1 | 0 |
kurman | 0 | 1 | 0 |
abhi-ort | 0 | 1 | 0 |
YUNQIUGUO | 0 | 1 | 0 |
milesial | 0 | 1 | 0 |
aakhundov | 0 | 1 | 0 |
pearu | 0 | 1 | 0 |
ZzEeKkAa | 0 | 1 | 0 |
BoyueZheng | 0 | 1 | 0 |
VladimirFokow | 0 | 1 | 0 |
ysiraichi | 0 | 1 | 0 |
laithsakka | 0 | 1 | 0 |
voznesenskym | 0 | 1 | 0 |
ahojnnes | 0 | 1 | 0 |
xytintel | 0 | 1 | 0 |
pratiklp00 | 0 | 1 | 0 |
guoyejun | 0 | 1 | 0 |
JonathanWenger | 0 | 1 | 0 |
frost-intel | 0 | 1 | 0 |
WenleiHe | 0 | 1 | 0 |
hippocookie | 0 | 1 | 0 |
RabbitWhite1 | 0 | 1 | 0 |
richwomanbtc | 0 | 1 | 0 |
Gunale0926 | 0 | 1 | 0 |
qingyunqu | 0 | 1 | 0 |
ENUMERA8OR | 0 | 1 | 0 |
zitongzhan | 0 | 1 | 0 |
AlekseiNikiforovIBM | 0 | 1 | 0 |
842974287 | 0 | 1 | 0 |
Fuzzkatt | 0 | 1 | 0 |
bt2513 | 0 | 1 | 0 |
Xia-Weiwen | 0 | 1 | 0 |
ZhaoqiongZ | 0 | 1 | 0 |
VRSinghHabana | 0 | 1 | 0 |
tmct | 0 | 1 | 0 |
daulet-askarov | 0 | 1 | 0 |
bigfootjon | 0 | 1 | 0 |
houqi | 0 | 1 | 0 |
naromero77amd | 0 | 1 | 0 |
TsukiSky | 0 | 1 | 0 |
AngryLoki | 0 | 1 | 0 |
BeeGass | 0 | 1 | 0 |
nautsimon | 0 | 1 | 0 |
Mustafa-Hassan2001 | 0 | 1 | 0 |
aim-nara | 0 | 1 | 0 |
skotapati | 0 | 1 | 0 |
connernilsen | 0 | 1 | 0 |
Stonepia | 0 | 1 | 0 |
dnikolaev-amd | 0 | 1 | 0 |
mwlon | 0 | 1 | 0 |
dulinriley | 0 | 1 | 0 |
bertmaher | 0 | 1 | 0 |
MatzeB | 0 | 1 | 0 |
galv | 0 | 1 | 0 |
yangsiyu007 | 0 | 1 | 0 |
xu-song | 0 | 1 | 0 |
Alston-Tang | 0 | 1 | 0 |
haampie | 0 | 1 | 0 |
harshabhvr248 | 0 | 1 | 0 |
swolchok | 0 | 1 | 0 |
alugorey | 0 | 1 | 0 |
rlanday | 0 | 1 | 0 |
inkcherry | 0 | 1 | 0 |
sidt-meta | 0 | 1 | 0 |
wlei-llvm | 0 | 1 | 0 |
gag1jain | 0 | 1 | 0 |
ankurneog | 0 | 1 | 0 |
DenisVieriu97 | 0 | 1 | 0 |
alexcdennis | 0 | 1 | 0 |
drewfustin | 0 | 1 | 0 |
tchaikov | 0 | 1 | 0 |
shengbao-zheng | 0 | 1 | 0 |
ahmadsarvmeily | 0 | 1 | 0 |
dan-jacobson | 0 | 1 | 0 |
soumith | 0 | 1 | 0 |
cchan | 0 | 1 | 0 |
DellCurry | 0 | 1 | 0 |
Ryo-not-rio | 0 | 1 | 0 |
frostedoyster | 0 | 1 | 0 |
charlie-wt | 0 | 1 | 0 |
hongxiayang | 0 | 1 | 0 |
lessw2020 | 0 | 1 | 0 |
adriaorenstein | 0 | 1 | 0 |
zixi-qi | 0 | 1 | 0 |
pragupta | 0 | 1 | 0 |
Theo-Cheynel | 0 | 1 | 0 |
zertosh | 0 | 1 | 0 |
m1guelperez | 0 | 1 | 0 |
adhithadias | 0 | 1 | 0 |
chuanhaozhuge | 0 | 1 | 0 |
kirtiteja | 0 | 1 | 0 |
uniartisan | 0 | 1 | 0 |
trixirt | 0 | 1 | 0 |
sanchitintel | 0 | 1 | 0 |
CuiYifeng | 0 | 1 | 0 |
ashwani-rathee | 0 | 1 | 0 |
arui-meta | 0 | 1 | 0 |
Luthaf | 0 | 1 | 0 |
avikchaudhuri | 0 | 1 | 0 |
danzimm | 0 | 1 | 0 |
Gasoonjia | 0 | 1 | 0 |
CCLDArjun | 0 | 0 | 1 |
ad8e | 0 | 0 | 1 |
quasinnovate | 0 | 0 | 1 |
tianqiong123 | 0 | 0 | 1 |
davidtweedle | 0 | 0 | 1 |
hi20240217 | 0 | 0 | 1 |
andoorve | 0 | 0 | 1 |
rpwang17 | 0 | 0 | 1 |
HagaiAstrin | 0 | 0 | 1 |
marib00 | 0 | 0 | 1 |
WangYutao1995 | 0 | 0 | 1 |
alinpahontu2912 | 0 | 0 | 1 |
ChuBoning | 0 | 0 | 1 |
qyhfrank | 0 | 0 | 1 |
peri044 | 0 | 0 | 1 |
shehper | 0 | 0 | 1 |
4grass | 0 | 0 | 1 |
weifengpy | 0 | 0 | 1 |
beratuna | 0 | 0 | 1 |
straygar | 0 | 0 | 1 |
mitkotak | 0 | 0 | 1 |
AnnaTrainingG | 0 | 0 | 1 |
LittleLittleCloud | 0 | 0 | 1 |
perwez12 | 0 | 0 | 1 |
vinayakdsci | 0 | 0 | 1 |
X-Bruce-Y | 0 | 0 | 1 |
Tisha-linkenite | 0 | 0 | 1 |
vedaanta | 0 | 0 | 1 |
zezhang | 0 | 0 | 1 |
ArenaGrenade | 0 | 0 | 1 |
weipeng-1992 | 0 | 0 | 1 |
rob-hen | 0 | 0 | 1 |
lixin-sxty | 0 | 0 | 1 |
dnitti-psee | 0 | 0 | 1 |
HichemAK | 0 | 0 | 1 |
JieRen98 | 0 | 0 | 1 |
Li357 | 0 | 0 | 1 |
baldassarreFe | 0 | 0 | 1 |
JonasGeiping | 0 | 0 | 1 |
YingLiGithub | 0 | 0 | 1 |
1716775457damn | 0 | 0 | 1 |
judicaelclair | 0 | 0 | 1 |
johnc-keen | 0 | 0 | 1 |
Abhishekghosh1998 | 0 | 0 | 1 |
Laurick1 | 0 | 0 | 1 |
LOOKCC | 0 | 0 | 1 |
HaoyuLiu12 | 0 | 0 | 1 |
NeoLegends | 0 | 0 | 1 |
jokercw147 | 0 | 0 | 1 |
xfchangwei | 0 | 0 | 1 |
amitchawla1 | 0 | 0 | 1 |
wbigat2 | 0 | 0 | 1 |
Leonardo-Russo | 0 | 0 | 1 |
dbl001 | 0 | 0 | 1 |
lezcano | 0 | 0 | 1 |
ZenithGenius | 0 | 0 | 1 |
rogaits | 0 | 0 | 1 |
hanwen-sun | 0 | 0 | 1 |
s1030512149 | 0 | 0 | 1 |
sealoongleft | 0 | 0 | 1 |
shyakocat | 0 | 0 | 1 |
blackyang | 0 | 0 | 1 |
v4if | 0 | 0 | 1 |
Hjp-momojiji | 0 | 0 | 1 |
lflis | 0 | 0 | 1 |
Antonio-Moura-Coutinho | 0 | 0 | 1 |
wht0948 | 0 | 0 | 1 |
vadimkantorov | 0 | 0 | 1 |
xle97 | 0 | 0 | 1 |
pietrolesci | 0 | 0 | 1 |
yiliu30 | 0 | 0 | 1 |
lw | 0 | 0 | 1 |
alexdremov | 0 | 0 | 1 |
bryankaplan | 0 | 0 | 1 |
younghuvee | 0 | 0 | 1 |
FabianSchuetze | 0 | 0 | 1 |
dannikay | 0 | 0 | 1 |
BioGeek | 0 | 0 | 1 |
quinnwillett | 0 | 0 | 1 |
GdoongMathew | 0 | 0 | 1 |
enrico-stauss | 0 | 0 | 1 |
david-sitsky | 0 | 0 | 1 |
battaglia01 | 0 | 0 | 1 |
MaltoseFlower | 0 | 0 | 1 |
NicolasHug | 0 | 0 | 1 |
rgommers | 0 | 0 | 1 |
Vremold | 0 | 0 | 1 |
lucasjinreal | 0 | 0 | 1 |
valosekj | 0 | 0 | 1 |
curtisvwalker | 0 | 0 | 1 |
tinglvv | 0 | 0 | 1 |
tylerjereddy | 0 | 0 | 1 |
Qinlong275 | 0 | 0 | 1 |
RobuRishabh | 0 | 0 | 1 |
wangjiangben-hw | 0 | 0 | 1 |
AdrienCourtois | 0 | 0 | 1 |
szmigacz | 0 | 0 | 1 |
joacorapela | 0 | 0 | 1 |
yaxan | 0 | 0 | 1 |
guberti | 0 | 0 | 1 |
sidijju | 0 | 0 | 1 |
matthost | 0 | 0 | 1 |
blaine-rister | 0 | 0 | 1 |
deo-abhijit | 0 | 0 | 1 |
Coderx7 | 0 | 0 | 1 |
thanga-v2 | 0 | 0 | 1 |
rteehas | 0 | 0 | 1 |
abcamiletto | 0 | 0 | 1 |
akihironitta | 0 | 0 | 1 |
muellerzr | 0 | 0 | 1 |
Zzv213 | 0 | 0 | 1 |
optstat | 0 | 0 | 1 |
UnbearableFate | 0 | 0 | 1 |
cora-codes | 0 | 0 | 1 |
airsplay | 0 | 0 | 1 |
xTayEx | 0 | 0 | 1 |
SperenzaNarra | 0 | 0 | 1 |
KukumavMozolo | 0 | 0 | 1 |
PeterSH6 | 0 | 0 | 1 |
rohitdwivedula | 0 | 0 | 1 |
aabtop | 0 | 0 | 1 |
rvijayc | 0 | 0 | 1 |
PierrunoYT | 0 | 0 | 1 |
zhaohm14 | 0 | 0 | 1 |
accelerate321 | 0 | 0 | 1 |
SalmanMohammadi | 0 | 0 | 1 |
Ignasijus | 0 | 0 | 1 |
jamied157 | 0 | 0 | 1 |
yezhengmao1 | 0 | 0 | 1 |
fxmarty | 0 | 0 | 1 |
ausstein | 0 | 0 | 1 |
rohan-tan-bhowmik | 0 | 0 | 1 |
BalancedTernary | 0 | 0 | 1 |
njzjz | 0 | 0 | 1 |
asglover | 0 | 0 | 1 |
chadeos | 0 | 0 | 1 |
AlexanderDokuchaev | 0 | 0 | 1 |
leitian | 0 | 0 | 1 |
kangchengX | 0 | 0 | 1 |
Tim-Salzmann | 0 | 0 | 1 |
ivodopyanov | 0 | 0 | 1 |
Hongjie1Chu | 0 | 0 | 1 |
platers | 0 | 0 | 1 |
embg | 0 | 0 | 1 |
coogle | 0 | 0 | 1 |
carmocca | 0 | 0 | 1 |
northfun | 0 | 0 | 1 |
nbqu | 0 | 0 | 1 |
wangkl2 | 0 | 0 | 1 |
sujuyu | 0 | 0 | 1 |
ben-da6 | 0 | 0 | 1 |
biuq | 0 | 0 | 1 |
ojh31 | 0 | 0 | 1 |
bigmover | 0 | 0 | 1 |
ConnollyLeon | 0 | 0 | 1 |
mattiadg | 0 | 0 | 1 |
Giodiro | 0 | 0 | 1 |
david-stojanovski | 0 | 0 | 1 |
psandovalsegura | 0 | 0 | 1 |
JamesMBartlett | 0 | 0 | 1 |
EGanji | 0 | 0 | 1 |
aws-caijune | 0 | 0 | 1 |
Cztery | 0 | 0 | 1 |
unsatisfying | 0 | 0 | 1 |
fjneumann | 0 | 0 | 1 |
fffelix-huang | 0 | 0 | 1 |
Ly-Lynn | 0 | 0 | 1 |
moghadas76 | 0 | 0 | 1 |
emosy | 0 | 0 | 1 |
Quoding | 0 | 0 | 1 |
ajindal1 | 0 | 0 | 1 |
urstrulyvishtan | 0 | 0 | 1 |
Gamer-Guy12 | 0 | 0 | 1 |
Picaloer | 0 | 0 | 1 |
SanityRemnants | 0 | 0 | 1 |
emmaking-smith | 0 | 0 | 1 |
maruel | 0 | 0 | 1 |
svekars | 0 | 0 | 1 |
Badr-MOUFAD | 0 | 0 | 1 |
kentanabe | 0 | 0 | 1 |
martintmv-git | 0 | 0 | 1 |
seungjun-green | 0 | 0 | 1 |
dcaustin33 | 0 | 0 | 1 |
mfbalin | 0 | 0 | 1 |
Amir9663 | 0 | 0 | 1 |
ItamarKanter | 0 | 0 | 1 |
wenbindu | 0 | 0 | 1 |
MoFHeka | 0 | 0 | 1 |
rlrs | 0 | 0 | 1 |
kshitij12345 | 0 | 0 | 1 |
johnmarktaylor91 | 0 | 0 | 1 |
kaiyuyue | 0 | 0 | 1 |
samskalicky | 0 | 0 | 1 |
tehbone | 0 | 0 | 1 |