Weekly GitHub Report for PyTorch - 2024-07-22 12:00:10
Thank you for subscribing to our weekly newsletter! Each week, we deliver a comprehensive summary of your GitHub project's latest activity right to your inbox, including an overview of your project's issues, pull requests, contributors, and commit activity.
I. Issues
1.1 Open Issues
Open Issues This Week: 109
Summarized Issues:
- Segmentation Faults in PyTorch on macOS and Windows: Multiple issues report segmentation faults occurring under different circumstances in PyTorch, including converting large NumPy arrays to PyTorch tensors on macOS, calling `saved_tensors_hooks.__exit__` during the backward pass, and loading TorchScript models in C++ applications on Windows. These faults highlight platform-specific challenges and the need for robust error handling in PyTorch (a minimal sketch of the hook mechanism appears after the issue list below).
- Dynamo and TorchScript Compilation Issues: Several issues describe problems with PyTorch's Dynamo and TorchScript compilation processes. These include unsupported function calls, graph breaks, and runtime errors when using specific functions or modules. These issues suggest the need for improved support and error handling in PyTorch's compilation mechanisms.
- github.com/pytorch/pytorch/issues/130719
- github.com/pytorch/pytorch/issues/130720
- github.com/pytorch/pytorch/issues/130722
- github.com/pytorch/pytorch/issues/130726
- github.com/pytorch/pytorch/issues/130750
- github.com/pytorch/pytorch/issues/130758
- github.com/pytorch/pytorch/issues/130761
- github.com/pytorch/pytorch/issues/130768
- github.com/pytorch/pytorch/issues/130776
- github.com/pytorch/pytorch/issues/130791
- github.com/pytorch/pytorch/issues/130792
- github.com/pytorch/pytorch/issues/130795
- github.com/pytorch/pytorch/issues/130807
- github.com/pytorch/pytorch/issues/130810
- github.com/pytorch/pytorch/issues/130815
- github.com/pytorch/pytorch/issues/130820
- github.com/pytorch/pytorch/issues/130825
- github.com/pytorch/pytorch/issues/130826
- github.com/pytorch/pytorch/issues/130828
- github.com/pytorch/pytorch/issues/130829
- github.com/pytorch/pytorch/issues/130840
- github.com/pytorch/pytorch/issues/130847
- github.com/pytorch/pytorch/issues/130859
- github.com/pytorch/pytorch/issues/130861
- github.com/pytorch/pytorch/issues/130863
- github.com/pytorch/pytorch/issues/130878
- github.com/pytorch/pytorch/issues/130916
- github.com/pytorch/pytorch/issues/130917
- github.com/pytorch/pytorch/issues/130918
- github.com/pytorch/pytorch/issues/130920
- github.com/pytorch/pytorch/issues/130927
- github.com/pytorch/pytorch/issues/130928
- github.com/pytorch/pytorch/issues/130930
- github.com/pytorch/pytorch/issues/130931
- github.com/pytorch/pytorch/issues/130932
- github.com/pytorch/pytorch/issues/130948
- github.com/pytorch/pytorch/issues/130950
- github.com/pytorch/pytorch/issues/130951
- github.com/pytorch/pytorch/issues/130953
- github.com/pytorch/pytorch/issues/130958
- github.com/pytorch/pytorch/issues/130960
- github.com/pytorch/pytorch/issues/130968
- github.com/pytorch/pytorch/issues/130975
- github.com/pytorch/pytorch/issues/130978
- github.com/pytorch/pytorch/issues/130980
- github.com/pytorch/pytorch/issues/130985
- github.com/pytorch/pytorch/issues/130999
- github.com/pytorch/pytorch/issues/131009
- github.com/pytorch/pytorch/issues/131011
- github.com/pytorch/pytorch/issues/131019
- github.com/pytorch/pytorch/issues/131020
- github.com/pytorch/pytorch/issues/131022
- github.com/pytorch/pytorch/issues/131025
- github.com/pytorch/pytorch/issues/131027
- github.com/pytorch/pytorch/issues/131035
- github.com/pytorch/pytorch/issues/131040
- github.com/pytorch/pytorch/issues/131043
- github.com/pytorch/pytorch/issues/131045
- github.com/pytorch/pytorch/issues/131047
- github.com/pytorch/pytorch/issues/131050
- github.com/pytorch/pytorch/issues/131054
- github.com/pytorch/pytorch/issues/131055
- github.com/pytorch/pytorch/issues/131062
- github.com/pytorch/pytorch/issues/131066
- github.com/pytorch/pytorch/issues/131070
- github.com/pytorch/pytorch/issues/131072
- github.com/pytorch/pytorch/issues/131110
- github.com/pytorch/pytorch/issues/131113
- github.com/pytorch/pytorch/issues/131130
- github.com/pytorch/pytorch/issues/131148
- github.com/pytorch/pytorch/issues/131150
- github.com/pytorch/pytorch/issues/131154
- github.com/pytorch/pytorch/issues/131185
- github.com/pytorch/pytorch/issues/131189
- github.com/pytorch/pytorch/issues/131192
- github.com/pytorch/pytorch/issues/131196
- github.com/pytorch/pytorch/issues/131213
- github.com/pytorch/pytorch/issues/131245
- github.com/pytorch/pytorch/issues/131254
- github.com/pytorch/pytorch/issues/131257
- github.com/pytorch/pytorch/issues/131263
- github.com/pytorch/pytorch/issues/131265
- github.com/pytorch/pytorch/issues/131272
- github.com/pytorch/pytorch/issues/131273
- github.com/pytorch/pytorch/issues/131274
- github.com/pytorch/pytorch/issues/131279
- github.com/pytorch/pytorch/issues/131280
- github.com/pytorch/pytorch/issues/131283
- github.com/pytorch/pytorch/issues/131284
- github.com/pytorch/pytorch/issues/131285
- github.com/pytorch/pytorch/issues/131290
- github.com/pytorch/pytorch/issues/131292
- github.com/pytorch/pytorch/issues/131294
- github.com/pytorch/pytorch/issues/131299
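For readers unfamiliar with the hook mechanism named in the segmentation-fault summary above, the following is a minimal sketch of ordinary `torch.autograd.graph.saved_tensors_hooks` usage, not a reproduction of any reported crash; the no-op pack/unpack functions are illustrative placeholders.

```python
import torch

def pack(tensor):
    # Called when autograd saves a tensor for the backward pass;
    # real uses might offload the tensor to CPU or compress it.
    return tensor

def unpack(packed):
    # Called during backward to recover the saved tensor.
    return packed

x = torch.randn(4, requires_grad=True)
# The context manager's __enter__/__exit__ bracket the forward pass;
# one reported segfault involves __exit__ being reached during backward.
with torch.autograd.graph.saved_tensors_hooks(pack, unpack):
    y = (x * x).sum()
y.backward()
print(x.grad)  # gradient of sum(x^2) is 2x
```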
1.2 Top 5 Active Issues:
We consider active issues to be those that have generated substantial discussion in their comments.
- [v.2.4.0] Release Tracker: This issue is a release tracker for the 2.4.0 version of the PyTorch project, detailing the phases and criteria for cherry-picking changes into the release branch. It includes specific guidelines for what types of changes are allowed during different phases of the release process and outlines the steps for submitting a cherry-pick request.
- The comments section consists of multiple requests for cherry-picking specific changes into the release branch, with each request including links to the relevant pull requests and the criteria category. Most comments end with a confirmation of the merge by a release manager, and some include additional discussions or requests for more information.
- Number of comments: 54
- Undefined symbol: cuOccupancyMaxActiveClusters: This issue is about a failure in `torch.compile` on the nightly build of PyTorch, specifically when using the `inductor` backend, due to an undefined symbol error related to `cuOccupancyMaxActiveClusters`. The problem appears to be linked to CUDA version compatibility, with discussions around whether the issue is specific to CUDA 11.x and potential fixes involving updates to Triton.
- The comments discuss the compatibility of CUDA 11.x wheels, the specifics of the error, and the environment setup. There are suggestions to test with different versions, and a potential fix is proposed. The conversation also touches on CI testing practices and the need for proper coverage for CUDA 11.x. The issue is confirmed to persist in PyTorch 2.2 RC1, and there is a suggestion to update Triton to resolve the problem.
- Number of comments: 28
- Fused Linear and Cross-Entropy Loss `torch.nn.functional.linear_cross_entropy`: This issue proposes the addition of a fused linear and cross-entropy loss function, `torch.nn.functional.linear_cross_entropy`, to PyTorch. The motivation behind this feature is to reduce VRAM consumption by avoiding the materialization of intermediate logits, which is particularly beneficial for models with large vocabularies and batch sizes (a sketch of the unfused baseline appears after this list).
- The comments discuss the feasibility and implementation of the proposed function, including prototype code, potential optimizations, and benchmarks. Various contributors share their experiences, suggest improvements, and provide performance metrics, highlighting the potential VRAM and computational benefits of the fused function.
- Number of comments: 27
- PyTorch 2.4 windows performance regression compared with 0410 nightly: This issue reports a performance regression in the Windows version of PyTorch 2.4 compared to a previous nightly build from April 10, 2024. The user provides detailed performance metrics for various models and requests assistance in identifying the specific nightly build that introduced the regression.
- The comments discuss potential causes of the regression, including changes in AVX2/AVX512 and MKL linking, and suggest testing various nightly builds to pinpoint the issue. They also mention plans to revert certain changes and test new build options, eventually confirming that the performance issue was resolved in the latest nightly build.
- Number of comments: 19
- Support Dynamo level Caching: This issue is about the need for caching in `torch.compile` to improve the development speed when working with large models like Llama2 7B, especially in scenarios where multiple input shape combinations need to be pre-compiled. The current lack of dynamic shape support in PyTorch/XLA results in significant delays during the warm-up phase, prompting a request for a caching mechanism to mitigate this problem.
- The comments discuss potential solutions, including AOTAutograd caching and persistent disk caching, and explore the limitations and compatibility issues with PyTorch/XLA. There are also technical exchanges about dynamic shape support, FX graph caching, and specific implementation challenges related to XLA and AOTAutograd caching. The conversation concludes with a resolution to a bug in the test case affecting the `torch._dynamo.config.assume_static_by_default` setting.
- Number of comments: 15
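As referenced in the fused-loss item above, here is a minimal sketch of the unfused baseline that the proposal aims to improve on; the function name and sizes below are illustrative assumptions, and the proposed fused `torch.nn.functional.linear_cross_entropy` does not exist in PyTorch at the time of writing.

```python
import torch
import torch.nn.functional as F

def linear_cross_entropy_reference(hidden, weight, targets):
    # The unfused path materializes the full [batch, vocab] logits tensor;
    # for large vocabularies this intermediate dominates VRAM usage,
    # which is exactly what a fused kernel would avoid.
    logits = F.linear(hidden, weight)
    return F.cross_entropy(logits, targets)

batch, dim, vocab = 8, 1024, 32000   # illustrative sizes
hidden = torch.randn(batch, dim)
weight = torch.randn(vocab, dim)
targets = torch.randint(vocab, (batch,))
print(linear_cross_entropy_reference(hidden, weight, targets))
```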
1.3 Top 5 Quiet Issues:
We consider quiet issues to be those that have been open in this project for the longest time. The team should work together to get these issues resolved and closed as soon as possible.
- [FX][ONNX][exporter] Failed to export traced fx graph to onnx model: This issue pertains to a failure encountered when attempting to export a traced and quantized ResNet50 model to the ONNX format using the `torch.onnx.dynamo_export` function. The error message indicates that the process fails due to an unsupported operation related to quantized tensors in the meta tensors framework, resulting in a `TorchRuntimeError`.
- Open for 348 days, 20 hours, 00 minutes
- Translation layer (similar to torch_np) that can reliably lift Python operations into Tensor operations: This issue involves the need for a translation layer that can convert operations on native Python data types such as ints, floats, and bools into corresponding tensor operations within PyTorch, similar to an existing layer for NumPy programs. The challenge lies in handling various edge cases, such as differences in type promotion rules, device compatibility, and precision discrepancies between Python scalars and tensor operations (a small example of such a discrepancy appears after this list).
- Open for 340 days, 19 hours, 02 minutes
- [nightly][jit] bad constant exponent (e+38.f) in default_program fused_mul_div_add: This issue pertains to a bug in the PyTorch 2.1.0 Nightly version where the `torch.jit.trace()` function generates incorrect C++ CUDA code containing malformed constants, specifically `-3.402823466385289e+38.f` instead of the correct `-3.402823466385289e+38f`. This error causes runtime failures during the execution of traced models, as demonstrated in the provided example code and error messages.
- Open for 337 days, 19 hours, 57 minutes
- Enable FlashAttentionV2 on Windows: This issue is about enabling the FlashAttentionV2 kernel within the PyTorch core to support the Windows operating system. It is currently being tracked to ensure that the FlashAttention kernel, which is being updated to V2, becomes compatible with Windows, as it presently does not support this platform.
- Open for 327 days, 04 hours, 03 minutes
- Can't initializa NVML: This issue pertains to a bug where the user is unable to initialize NVML (NVIDIA Management Library) when using PyTorch version 2.0.1+cu117. As a result, the user encounters warnings and the system fails to recognize any available CUDA devices, indicating that `torch.cuda.is_available()` returns `False`.
- Open for 313 days, 16 hours, 06 minutes
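To make the translation-layer item above concrete, here is a small illustration (not taken from the issue itself) of one type-promotion discrepancy such a layer must handle: Python scalars participate in PyTorch's promotion rules as weak types, so they do not widen a tensor's dtype the way an equivalent tensor operand does.

```python
import torch

t = torch.tensor([250], dtype=torch.uint8)

# A Python int is a "weak" scalar: uint8 wins and the addition wraps around.
print(t + 10)                  # tensor([4], dtype=torch.uint8)

# An int64 tensor operand triggers ordinary promotion, so no wraparound.
print(t + torch.tensor([10]))  # tensor([260])

# A Python float promotes the integer tensor to the default float dtype.
print((t + 0.5).dtype)         # torch.float32
```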
1.4 Closed Issues
Closed Issues This Week: 89
Average Issue Close Time (This Week): 43.52 days
Summarized Issues:
1.5 Issue Discussion Insights
This section analyzes the tone and sentiment of discussions in this project's open issues from the past week to identify potentially heated exchanges and to maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open issues from the past week.
II. Pull Requests
2.1 Open Pull Requests
Open Pull Requests This Week: 172
Pull Requests:
2.2 Closed Pull Requests
Closed Pull Requests This Week: 373
Summarized Pull Requests:
2.3 Pull Request Discussion Insights
This section analyzes the tone and sentiment of discussions in this project's open pull requests from the past week to identify potentially heated exchanges and to maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open pull requests from the past week.
III. Commits
3.1 Commits
Commits This Week: 278
Summarized Commits:
- Module Transition Fixes: This commit addresses and fixes issues related to updating the current module during transitions between child and parent modules, as well as during the backward pass, by modifying hooks and correcting parent module tracking, and includes updates to examples and test cases to ensure compatibility and correctness.
- Function Removal: This commit removes the `mark_node_as_mutating` function from the inductor component of the PyTorch project, as part of a resubmission of a previous pull request (#129346), and has been approved by a reviewer with dependencies on three other pull requests (#130831, #130832, #130833).
- Error Message Enhancement: This commit enhances the error message for CUDA UpSample in PyTorch by including the method name and actual tensor size, addressing the issue reported in https://github.com/pytorch/pytorch/issues/131185.
- Coding Style Enforcement: This commit enforces a coding style for empty lines within import segments in the `torch/_i*/` directory, with most changes being auto-generated by a linter, as part of a pull request that has been approved and resolved.
- Maintainer Addition: This commit adds Alban Desmaison and Piotr Bialecki to the list of PyTorch Core Maintainers, as announced in the official PyTorch discussion forum and resolved through Pull Request #130903.
- Runtime Error Fix: This commit addresses a runtime error related to invalid configuration arguments when launching a kernel with large indices in PyTorch, by correcting the handling of large output sizes and ensuring compatibility with ROCm, and includes a test to verify the fix.
- Cache Blocking Optimization: This commit improves the cache blocking in the CPP GEMM template for single-threaded cases by utilizing CPU information such as L1 and L2 cache sizes, resulting in performance speedups for specific models, while deferring multi-threaded optimizations to a future update.
- Packed GEMM Optimization: This commit optimizes the packed GEMM template in the PyTorch project by padding the dimension `n` to a multiple of `register_block_n` (such as 8, 16, or 32) to improve performance, particularly when `n` is not already a multiple of these values, resulting in significant performance improvements on specific hardware configurations (a sketch of this padding arithmetic appears after this list).
- Intel Gaudi Support: This commit introduces hooks to support execution on Intel Gaudi devices in PyTorch, including adding dtype exceptions for Gaudi/HPU and extending the `onlyNativeDevices` decorator to accommodate additional devices.
- Invariant Violation Fix: This commit addresses an issue where a specific path with freezing enabled violated the invariant that all inputs must have the "tensor_dict" meta, by ensuring that the `register_attr_or_module` function also sets the tensor_dict meta.
- Function Description Addition: This commit adds descriptions to the `create_block_mask` function and modifies the mask functionality, as part of the changes proposed and approved in Pull Request #131209 on the PyTorch GitHub repository.
- All-Reduce Process Fixes: This commit addresses two issues in the two-shot all-reduce process by ensuring consistent reduction order across ranks and correcting the use of `get_buffer_ptrs_dev` to `get_buffer_ptrs` when migrating to SymmetricMemory, thereby resolving the problem described in https://github.com/pytorch/pytorch/issues/131215.
- Library Update: This commit updates the `optree` library to version 0.12.1, incorporating major updates such as a context manager for dictionary sorting, new accessor APIs, support for Python 3.13 via the `stable` tag for `pybind11`, a fix for a potential segmentation fault during pickling, and a regression fix for warnings during import when launched with strict warning filters.
- TrainingIRToRunDecomp Fixes: This commit fixes issues related to zero argument exports in the TrainingIRToRunDecomp process, addresses retracing failures, and removes the `eliminate_dead_code()` function in `_unlift` to prevent inconsistencies between the transformed graph and the original signature.
- IPC Conflict Resolution: This commit addresses an issue with inter-process communication (IPC) in the PyTorch project by creating new pipes for subprocess communication instead of using stdin/stdout, which resolves a conflict caused by deepspeed and onnxruntime-training writing to stdout and corrupting the IPC.
- Flex-Attention Support: This commit introduces the use of multiple outputs for flex-attention in the inductor module, addressing a Dead Code Elimination (DCE) issue, and is a resubmission of a previous pull request (#129344).
- TritonKernel Support: This commit modifies the `UserDefinedTritonKernel` to support multiple outputs by consolidating multiple `MutationOutput` operations into a single scheduler node, thereby improving the scheduling process.
- Buffer and Operation Separation: This commit separates the concepts of Buffer and Operation within the scheduler to distinguish between a tensor's physical storage and the computation that produces it, thereby enabling multiple outputs from a single operation.
- Channel Shuffle Decomposition: This commit introduces a decomposition for the `channel_shuffle` function as part of the PyTorch project, resolving Pull Request #118775, which was approved by GitHub user peterbell10.
- Sympy Version Pinning: This commit pins the `sympy` library to version 1.13.0 or higher for Python 3.9 and above, while maintaining version 1.12.1 for Python 3.8, due to breaking changes introduced in `sympy` 1.13.0 that affect test compatibility.
- Unused Parameter Warnings: This commit addresses and resolves warnings related to unused parameters in the codebase, as part of an ongoing effort to improve code quality, following a previous related commit (#130924), and has been approved through a pull request (#131170).
- Functorch Library Fix: This commit addresses an issue in the functorch library by ensuring that saved tensor hooks errors are only applied to gradient and vector-Jacobian product (vjp) transforms, while allowing vectorized map (vmap) and Jacobian-vector product (jvp) transforms to function without restrictions, as they operate above PyTorch autograd and save regular tensors.
- B2B-GEMM Performance Tuning: This commit focuses on performance tuning for B2B-GEMM (Back-to-Back General Matrix Multiply) within the Inductor component of the PyTorch project, and includes tests to validate the improvements, as detailed in pull request #130778 which has been approved by the user eellison.
- Variable Type Check Fix: This commit addresses a bug in the PyTorch project by correcting the method used to check the variable type of `entry.numel` in the `IterationRangesEntry` class, ensuring that the data type is verified as `sympy.Integer` instead of a generic integer type.
- Communication Resource Sharing Option: This commit introduces an option to the `pg_config` to allow users to choose not to share communication resources, enabling communication options to overlap, as part of the changes discussed in issue #129865 and tested with augmented unit tests.
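As referenced in the Packed GEMM bullet above, the padding it describes reduces to rounding `n` up to the next multiple of the register block; the helper below is a hypothetical illustration of that arithmetic, not PyTorch's internal template code.

```python
def pad_to_multiple(n: int, block: int) -> int:
    # Round n up to the next multiple of block (e.g. register_block_n
    # values such as 8, 16, or 32 from the commit summary above).
    return ((n + block - 1) // block) * block

assert pad_to_multiple(100, 16) == 112  # padded: 100 -> 7 * 16
assert pad_to_multiple(128, 16) == 128  # already aligned: unchanged
```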
IV. Contributors
4.1 Contributors
Active Contributors:
We consider an active contributor in this project to be any contributor who has made at least 1 commit, opened at least 1 issue, or created at least 1 pull request in the past month.
Contributor | Commits | Pull Requests | Issues |
---|---|---|---|
PyTorch MergeBot | 292 | 0 | 0 |
XuehaiPan | 0 | 55 | 0 |
hyperkai | 0 | 0 | 30 |
cyyever | 0 | 27 | 0 |
anijain2305 | 0 | 16 | 4 |
williamwen42 | 0 | 18 | 1 |
peterbell10 | 0 | 15 | 1 |
Chillee | 0 | 12 | 4 |
ezyang | 0 | 10 | 4 |
mlazos | 0 | 14 | 0 |
zou3519 | 0 | 5 | 9 |
rec | 0 | 10 | 2 |
masnesral | 0 | 12 | 0 |
drisspg | 0 | 11 | 1 |
eellison | 0 | 10 | 1 |
yf225 | 0 | 7 | 2 |
AlnisM | 0 | 9 | 0 |
wanchaol | 0 | 8 | 1 |
etaf | 0 | 4 | 5 |
guangyey | 0 | 8 | 0 |
malfet | 0 | 8 | 0 |
yifuwang | 0 | 7 | 0 |
clee2000 | 0 | 5 | 2 |
aorenste | 0 | 7 | 0 |
wconstab | 0 | 6 | 1 |
wz337 | 0 | 6 | 1 |
xuhancn | 0 | 7 | 0 |
vmoens | 0 | 0 | 7 |
jiashenC | 0 | 6 | 0 |
ZainRizvi | 0 | 6 | 0 |
sinhaanshul | 0 | 6 | 0 |
eqy | 0 | 6 | 0 |
BoyuanFeng | 0 | 5 | 0 |
desertfire | 0 | 4 | 1 |
angelayi | 0 | 4 | 1 |
mori360 | 0 | 5 | 0 |
jataylo | 0 | 5 | 0 |
Skylion007 | 0 | 5 | 0 |
bdhirsh | 0 | 3 | 2 |
yushangdi | 0 | 3 | 2 |
shuqiangzhang | 0 | 4 | 1 |
jgong5 | 0 | 5 | 0 |
oulgen | 0 | 5 | 0 |
zdevito | 0 | 5 | 0 |
awgu | 0 | 4 | 1 |
jianc99 | 0 | 0 | 5 |
jamesjwu | 0 | 4 | 0 |
chuanqi129 | 0 | 4 | 0 |
leslie-fang-intel | 0 | 4 | 0 |
ZhiweiYan-96 | 0 | 4 | 0 |
pianpwk | 0 | 4 | 0 |
zxd1997066 | 0 | 1 | 3 |
isuruf | 0 | 3 | 1 |
jbschlosser | 0 | 4 | 0 |
qqaatw | 0 | 4 | 0 |
XilunWu | 0 | 4 | 0 |
soulitzer | 0 | 2 | 2 |
Aidyn-A | 0 | 3 | 1 |
zhxchen17 | 0 | 4 | 0 |
guilhermeleobas | 0 | 2 | 2 |
albanD | 0 | 3 | 1 |
awaelchli | 0 | 2 | 2 |
joydddd | 0 | 4 | 0 |
jeffdaily | 0 | 3 | 1 |
Danielmic | 0 | 1 | 3 |
tianyu-l | 0 | 4 | 0 |
YangQun1 | 0 | 2 | 2 |
chunyuan-w | 0 | 3 | 0 |
fegin | 0 | 3 | 0 |
jiayisunx | 0 | 3 | 0 |
yanbing-j | 0 | 3 | 0 |
xuzhao9 | 0 | 3 | 0 |
mikaylagawarecki | 0 | 3 | 0 |
oraluben | 0 | 2 | 1 |
syed-ahmed | 0 | 3 | 0 |
justinchuby | 0 | 2 | 1 |
ColinPeppler | 0 | 3 | 0 |
ydwu4 | 0 | 3 | 0 |
andriigrynenko | 0 | 3 | 0 |
ppwwyyxx | 0 | 2 | 1 |
davidberard98 | 0 | 2 | 1 |
xw285cornell | 0 | 2 | 1 |
peaceorwell | 0 | 3 | 0 |
fwenguang | 0 | 2 | 1 |
IvanKobzarev | 0 | 2 | 1 |
redwrasse | 0 | 2 | 1 |
fduwjj | 0 | 3 | 0 |
henrylhtsang | 0 | 3 | 0 |
nicholasw-gc | 0 | 3 | 0 |
ringohoffman | 0 | 2 | 1 |
crcrpar | 0 | 2 | 0 |
majing921201 | 0 | 1 | 1 |
nvcastet | 0 | 2 | 0 |
mengph | 0 | 1 | 1 |
tenpercent | 0 | 2 | 0 |
jerrychenhf | 0 | 1 | 1 |
sradc | 0 | 1 | 1 |
sijiac | 0 | 2 | 0 |
xingyunjohn1 | 0 | 2 | 0 |
haocizhang | 0 | 2 | 0 |
datagero | 0 | 2 | 0 |
jananisriram | 0 | 2 | 0 |
WeiChunyu-star | 0 | 2 | 0 |
H-Huang | 0 | 2 | 0 |
jerryzh168 | 0 | 2 | 0 |
YuqingJ | 0 | 2 | 0 |
robert-hardwick | 0 | 2 | 0 |
atalman | 0 | 1 | 1 |
atuljangra | 0 | 2 | 0 |
jianyuh | 0 | 2 | 0 |
aaronenyeshi | 0 | 2 | 0 |
maxyanghu | 0 | 1 | 1 |
Microve | 0 | 1 | 1 |
d4l3k | 0 | 2 | 0 |
zxiiro | 0 | 2 | 0 |
koparasy | 0 | 1 | 1 |
JackCaoG | 0 | 2 | 0 |
DiweiSun | 0 | 2 | 0 |
brim1754 | 0 | 1 | 1 |
jeanschmidt | 0 | 1 | 1 |
zhuhaozhe | 0 | 2 | 0 |
krzysztofjordan | 0 | 1 | 1 |
zhangfeiv0 | 0 | 1 | 1 |
nikonikolov | 0 | 1 | 1 |
clessig | 0 | 0 | 2 |
WeizhuoZhang-intel | 0 | 0 | 2 |
gau-nernst | 0 | 0 | 2 |
xinyu-intel | 0 | 0 | 2 |
albertz | 0 | 0 | 2 |
dvrogozh | 0 | 0 | 2 |
gilfree | 0 | 0 | 2 |
benbellick | 0 | 0 | 2 |
GitHub | 1 | 0 | 0 |
CaoE | 0 | 1 | 0 |
dsjohns2 | 0 | 1 | 0 |
v0lta | 0 | 1 | 0 |
siahuat0727 | 0 | 1 | 0 |
EikanWang | 0 | 1 | 0 |
ZzEeKkAa | 0 | 1 | 0 |
xytintel | 0 | 1 | 0 |
afrittoli | 0 | 1 | 0 |
frost-intel | 0 | 1 | 0 |
RabbitWhite1 | 0 | 1 | 0 |
khushi-411 | 0 | 1 | 0 |
zitongzhan | 0 | 1 | 0 |
janeyx99 | 0 | 1 | 0 |
naromero77amd | 0 | 1 | 0 |
Shan19900305 | 0 | 1 | 0 |
Stonepia | 0 | 1 | 0 |
mwlon | 0 | 1 | 0 |
sraikund16 | 0 | 1 | 0 |
MatzeB | 0 | 1 | 0 |
galv | 0 | 1 | 0 |
yaochengji | 0 | 1 | 0 |
yanboliang | 0 | 1 | 0 |
yangsiyu007 | 0 | 1 | 0 |
alugorey | 0 | 1 | 0 |
rlanday | 0 | 1 | 0 |
jainapurva | 0 | 1 | 0 |
sidt-meta | 0 | 1 | 0 |
jovianjaison | 0 | 1 | 0 |
wlei-llvm | 0 | 1 | 0 |
awayzjj | 0 | 1 | 0 |
gag1jain | 0 | 1 | 0 |
ankurneog | 0 | 1 | 0 |
DenisVieriu97 | 0 | 1 | 0 |
alexcdennis | 0 | 1 | 0 |
c-p-i-o | 0 | 1 | 0 |
sdingcn | 0 | 1 | 0 |
drewfustin | 0 | 1 | 0 |
tchaikov | 0 | 1 | 0 |
sanketpurandare | 0 | 1 | 0 |
zejun-chen | 0 | 1 | 0 |
shengbao-zheng | 0 | 1 | 0 |
ahmadsarvmeily | 0 | 1 | 0 |
egienvalue | 0 | 1 | 0 |
dan-jacobson | 0 | 1 | 0 |
soumith | 0 | 1 | 0 |
cchan | 0 | 1 | 0 |
DellCurry | 0 | 1 | 0 |
Ryo-not-rio | 0 | 1 | 0 |
nmacchioni | 0 | 1 | 0 |
frostedoyster | 0 | 1 | 0 |
charlie-wt | 0 | 1 | 0 |
FindHao | 0 | 1 | 0 |
hongxiayang | 0 | 1 | 0 |
lessw2020 | 0 | 1 | 0 |
adriaorenstein | 0 | 1 | 0 |
fengyuan14 | 0 | 1 | 0 |
jerrymannil | 0 | 1 | 0 |
MaggieMoss | 0 | 1 | 0 |
zixi-qi | 0 | 1 | 0 |
pragupta | 0 | 1 | 0 |
Theo-Cheynel | 0 | 1 | 0 |
zertosh | 0 | 1 | 0 |
m1guelperez | 0 | 1 | 0 |
adhithadias | 0 | 1 | 0 |
chuanhaozhuge | 0 | 1 | 0 |
kirtiteja | 0 | 1 | 0 |
uniartisan | 0 | 1 | 0 |
trixirt | 0 | 1 | 0 |
sayakpaul | 0 | 0 | 1 |
beratuna | 0 | 0 | 1 |
zhouzaida | 0 | 0 | 1 |
yuanyao-nv | 0 | 0 | 1 |
zezhang | 0 | 0 | 1 |
Abhishekghosh1998 | 0 | 0 | 1 |
wbigat | 0 | 0 | 1 |
ani300 | 0 | 0 | 1 |
youkaichao | 0 | 0 | 1 |
BioGeek | 0 | 0 | 1 |
battaglia01 | 0 | 0 | 1 |
rgommers | 0 | 0 | 1 |
Qinlong275 | 0 | 0 | 1 |
wangjiangben-hw | 0 | 0 | 1 |
AdrienCourtois | 0 | 0 | 1 |
sidijju | 0 | 0 | 1 |
blaine-rister | 0 | 0 | 1 |
Coderx7 | 0 | 0 | 1 |
abcamiletto | 0 | 0 | 1 |
Zzv213 | 0 | 0 | 1 |
optstat | 0 | 0 | 1 |
UnbearableFate | 0 | 0 | 1 |
airsplay | 0 | 0 | 1 |
PeterSH6 | 0 | 0 | 1 |
rohitdwivedula | 0 | 0 | 1 |
aabtop | 0 | 0 | 1 |
zhaohm14 | 0 | 0 | 1 |
accelerate321 | 0 | 0 | 1 |
SalmanMohammadi | 0 | 0 | 1 |
jamied157 | 0 | 0 | 1 |
yezhengmao1 | 0 | 0 | 1 |
fxmarty | 0 | 0 | 1 |
ausstein | 0 | 0 | 1 |
jakelevi1996 | 0 | 0 | 1 |
BalancedTernary | 0 | 0 | 1 |
xwang233 | 0 | 0 | 1 |
njzjz | 0 | 0 | 1 |
asglover | 0 | 0 | 1 |
chadeos | 0 | 0 | 1 |
dilililiwhy | 0 | 0 | 1 |
mayank31398 | 0 | 0 | 1 |
AlexanderDokuchaev | 0 | 0 | 1 |
leitian | 0 | 0 | 1 |
kangchengX | 0 | 0 | 1 |
Tim-Salzmann | 0 | 0 | 1 |
ivodopyanov | 0 | 0 | 1 |
Hongjie1Chu | 0 | 0 | 1 |
platers | 0 | 0 | 1 |
jithunnair-amd | 0 | 0 | 1 |
embg | 0 | 0 | 1 |
coogle | 0 | 0 | 1 |
tingyangk | 0 | 0 | 1 |
carmocca | 0 | 0 | 1 |
northfun | 0 | 0 | 1 |
nbqu | 0 | 0 | 1 |
wangkl2 | 0 | 0 | 1 |
sujuyu | 0 | 0 | 1 |
ben-da6 | 0 | 0 | 1 |
biuq | 0 | 0 | 1 |
tsengalb99 | 0 | 0 | 1 |
ojh31 | 0 | 0 | 1 |
bigmover | 0 | 0 | 1 |
ConnollyLeon | 0 | 0 | 1 |
mattiadg | 0 | 0 | 1 |
Giodiro | 0 | 0 | 1 |
david-stojanovski | 0 | 0 | 1 |
psandovalsegura | 0 | 0 | 1 |
JamesMBartlett | 0 | 0 | 1 |
EGanji | 0 | 0 | 1 |
aws-caijune | 0 | 0 | 1 |
Cztery | 0 | 0 | 1 |
staugust | 0 | 0 | 1 |
unsatisfying | 0 | 0 | 1 |
fjneumann | 0 | 0 | 1 |
fffelix-huang | 0 | 0 | 1 |
Ly-Lynn | 0 | 0 | 1 |
moghadas76 | 0 | 0 | 1 |
emosy | 0 | 0 | 1 |
Quoding | 0 | 0 | 1 |
ajindal1 | 0 | 0 | 1 |
urstrulyvishtan | 0 | 0 | 1 |
Gamer-Guy12 | 0 | 0 | 1 |
Picaloer | 0 | 0 | 1 |
jansel | 0 | 0 | 1 |
SanityRemnants | 0 | 0 | 1 |
emmaking-smith | 0 | 0 | 1 |
maruel | 0 | 0 | 1 |
svekars | 0 | 0 | 1 |
Badr-MOUFAD | 0 | 0 | 1 |
kentanabe | 0 | 0 | 1 |
martintmv-git | 0 | 0 | 1 |
seungjun-green | 0 | 0 | 1 |
dcaustin33 | 0 | 0 | 1 |
mfbalin | 0 | 0 | 1 |
Amir9663 | 0 | 0 | 1 |