Weekly GitHub Report for XLA - 2024-07-18 14:54:21
Thank you for subscribing to our weekly newsletter! Each week, we deliver a comprehensive summary of your GitHub project's latest activity right to your inbox, including an overview of your project's issues, pull requests, contributors, and commit activity.
I. Issues
1.1 Open Issues
Open Issues This Week: 7
Summarized Issues:
- ppc64le architecture support in XLA: This issue is about adding support for the ppc64le architecture to the XLA project by forking and maintaining a version of boringssl that includes ppc64le support, which was previously removed upstream and is not accepted back by the boringssl maintainers. This necessitates a separate maintenance effort to ensure compatibility.
- CUDA custom call registration issues: Marking a CUDA custom call as command buffer-compatible has no effect because the current logic only considers registrations for the generic platform `gpu`, so custom calls registered specifically for `CUDA` are ignored. The issue highlights a gap in the registration logic that needs addressing (a simplified model of the gap appears after this list).
- CUDA backend configuration errors: This issue pertains to an error encountered when configuring and building a project with the CUDA backend. The system fails to find a registered compiler for the CUDA platform, resulting in a NOT_FOUND error. This indicates a problem in the setup or registration of the CUDA compiler.
- AOT-compiled computation crash: A crash occurs when attempting to run an AOT-compiled computation in XLA. The error indicates that the program shape is missing from the proto during deserialization. This suggests a problem in the serialization or deserialization process of the computation.
- Python dependencies in C++ workflows: Unexpectedly, numerous Python-related dependencies need to be added to the `WORKSPACE` file when running C++-only workflows with XLA. This suggests undesirable transitive dependencies on Python targets in the Bazel dependency tree and highlights the need to clean up or isolate those dependencies.
- Strange sharding behavior in parallelism: Unexpected sharding behavior involving pipeline parallelism and tensor/data parallelism is observed, particularly around the Scatter operation. The user notes unexpected all-reduce and all-gather operations over seemingly random groups of workers instead of the expected tensor-parallel (TP) group, indicating a potential bug or misconfiguration in the sharding logic (a sketch for inspecting the emitted collectives appears after this list).
- No test targets found in CUDA environment: A user hits Bazel's "no test targets were found, yet testing was requested" error while trying to run a specific GPU test in a CUDA environment using Bazelisk, suggesting a problem in the test discovery or configuration process.
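To make the registration gap above concrete, here is a simplified, self-contained model of the described logic. This is illustrative only, not XLA's actual code: handlers are keyed by (name, platform), but the compatibility check consults only the generic `gpu` platform, so an entry registered under `CUDA` is never seen.

```python
# Simplified model of the custom call registration gap (illustrative only,
# not XLA's implementation).
registry = {}

def register(name, platform, command_buffer_compatible=False):
    registry[(name, platform)] = {"command_buffer_compatible": command_buffer_compatible}

def is_command_buffer_compatible(name):
    # The modeled bug: only the generic "gpu" registration is consulted,
    # so platform-specific registrations are ignored.
    entry = registry.get((name, "gpu"))
    return bool(entry and entry["command_buffer_compatible"])

register("my_custom_call", "CUDA", command_buffer_compatible=True)
print(is_command_buffer_compatible("my_custom_call"))  # False: the CUDA entry is missed
```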
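For the sharding report above, one way to check which collectives XLA emits for a scattered update is to lower and compile under a mesh and scan the optimized HLO text. This is a minimal sketch using JAX's public sharding and ahead-of-time APIs; the mesh axis, shapes, and update pattern are illustrative assumptions, not the reporter's program.

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec

# Build a 1-D mesh over whatever devices are available (illustrative axis name).
mesh = Mesh(np.array(jax.devices()), ("model",))

# Shard the second dimension across the "model" axis.
x = jax.device_put(
    jnp.zeros((8, 1024)),
    NamedSharding(mesh, PartitionSpec(None, "model")),
)

def scatter_update(x, idx, upd):
    return x.at[idx].set(upd)  # lowers to an HLO Scatter

lowered = jax.jit(scatter_update).lower(x, jnp.array([0]), jnp.ones((1, 1024)))
hlo_text = lowered.compile().as_text()
# Scan the optimized HLO for the collectives the issue reports, including
# the replica groups attached to all-reduce / all-gather ops.
for line in hlo_text.splitlines():
    if "all-reduce" in line or "all-gather" in line:
        print(line.strip())
```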
1.2 Top 5 Active Issues:
We consider active issues to be those that have generated the most discussion in their comments.
- Build fails: This issue is about a build failure encountered when trying to build XLA from source for CPU on an Ubuntu 20.04 machine using Bazel 6.5.0 and Clang 17. The error indicates that a specific header file, `gpublas_lt_matmul_thunk.h`, cannot be found despite being present in the directory.
- The comments discuss various build errors and fixes, including issues with missing files and compilation errors. Contributors suggest different build commands, identify recent changes that may have caused the issues, and provide fixes through commits. The conversation also includes troubleshooting steps, such as updating dependencies and modifying build configurations, with multiple contributors confirming when specific errors are resolved.
- Number of comments: 43
- How to use the PJRT C API?: This issue is about a user trying to use the PJRT C API to execute an HLO module but encountering difficulties with the instructions and implementation. The user has built the `pjrt_c_api_cpu` library and created a C++ file to load the HLO module but is unsure how to proceed, especially regarding the use of `GetPjrtApi()` and whether to switch to a C++ wrapper (a minimal loading sketch appears after this section).
- The comments discuss various solutions and suggestions, including using Bazel instead of CMake, building a CPU shared library, and using the GPU plugin as a reference. Users share their experiences, provide code examples, and discuss the challenges of integrating PJRT with different build systems. The conversation also touches on the use of PJRT with multiple inputs, deserialization issues, and the potential for upstream contributions.
- Number of comments: 25
- Save across containers: This issue asks about saving the optimized XLA program for later loads to improve the warm-up time for models using XLA via TensorFlow on NVIDIA GPUs during inference. The user asks whether this feature is on the roadmap, as the current warm-up time is not ideal given that the model and GPU remain the same every time.
- The comments discuss the various levels of caching supported, suggesting the use of full AOT compilation and specifying a compilation cache directory with TensorFlow flags. The user provides additional context about using `@tf.function` with `jit_compile=True` and faces issues with dynamic shapes. A workaround involving a persistent compilation cache and saving models is mentioned, but issues arise with multiple `@tf.function` calls. A minimal reproducible example is shared, and further suggestions include using specific TensorFlow flags to improve cache support (a hedged sketch of this setup appears after this section).
- Number of comments: 12
- [xla:cpu] [xla:gpu] DotGeneral ignored in GetHloCostAnalysis FLOPs: This issue highlights a problem where `dot_general`, `einsum`, `jnp.dot`, and similar operations in JAX return an estimated FLOP count of "-1.0" when using `GetHloCostAnalysis`, affecting both CPU and GPU computations. The problem seems to stem from how these operations are lowered or fused in XLA, leading to a loss of accurate FLOP count estimation (a cost-analysis sketch appears after this section).
- The comments discuss whether the issue is a bug in JAX or XLA, with some leaning towards a JAX bug given how XLA handles `dot` operations. There is consensus that the problem arises from XLA lowering operations to custom calls, which `HloCostAnalysis` cannot analyze. Suggestions include fixing this in XLA or having JAX develop its own cost analysis. The discussion also touches on the usefulness of accurate FLOP counts for debugging and optimization, and on adding cost-analysis metadata to custom calls.
- Number of comments: 12
- Build fails with ROCm on Gentoo Linux: This issue involves a developer attempting to build the `xla` extension with ROCm support on Gentoo Linux for use with the Elixir language, encountering compilation failures despite having installed all required dependencies. The developer is seeking assistance from the community to resolve the issue without resorting to workarounds like Docker, preferring to understand and fix the root cause.
- The comments discuss various troubleshooting steps, including setting up the correct GCC environment, addressing symbolic-link issues, and attempting to resolve linker plugin errors. Despite these efforts the issue persists, and other users report similar problems on different systems, indicating a broader compatibility issue with ROCm builds.
- Number of comments: 12
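Three sketches follow for the active issues above. First, for the PJRT C API question: a PJRT plugin exports a `GetPjrtApi` symbol returning a pointer to the `PJRT_Api` struct. The sketch below loads that symbol via Python's `ctypes` purely to show the entry point; the library path is a hypothetical placeholder for whatever was built from the `pjrt_c_api_cpu` target, and driving the full API is easier through the C++ wrapper the comments recommend.

```python
import ctypes

# Placeholder path: substitute the shared library actually built from the
# pjrt_c_api_cpu target; the exact filename depends on the build setup.
lib = ctypes.CDLL("./pjrt_c_api_cpu_plugin.so")

# GetPjrtApi() returns a pointer to the PJRT_Api struct; here we only check
# that the symbol resolves and returns a non-null pointer.
lib.GetPjrtApi.restype = ctypes.c_void_p
api_ptr = lib.GetPjrtApi()
assert api_ptr is not None, "plugin did not return a PJRT_Api pointer"
print(f"PJRT_Api at 0x{api_ptr:x}")
```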
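Second, for the save-across-containers discussion: a minimal sketch of the setup described in the comments, combining `@tf.function(jit_compile=True)` with a persistent compilation cache directory. The `--tf_xla_persistent_cache_directory` flag is my best reading of the mechanism mentioned; treat the flag name and its behavior across TensorFlow versions as an assumption to verify.

```python
import os

# Assumed flag for the persistent compilation cache discussed in the issue;
# must be set before TensorFlow initializes XLA.
os.environ["TF_XLA_FLAGS"] = "--tf_xla_persistent_cache_directory=/tmp/xla_cache"

import tensorflow as tf

@tf.function(jit_compile=True)
def matmul(x):
    return tf.linalg.matmul(x, x)

x = tf.ones((512, 512))
# The first call compiles (and, if the cache works as described, populates
# /tmp/xla_cache); later processes with the same flag should reuse it.
print(matmul(x).shape)
```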
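Third, for the FLOP-count issue: JAX exposes XLA's cost analysis on ahead-of-time-compiled executables via `cost_analysis()`. A minimal sketch; the return structure varies across JAX versions, and on backends where the dot is lowered to a custom call the reported `flops` entry may be missing or -1.0, as the issue describes.

```python
import jax
import jax.numpy as jnp

def f(a, b):
    return jnp.dot(a, b)  # lowers to an HLO dot_general

a = jnp.ones((128, 256))
b = jnp.ones((256, 64))

compiled = jax.jit(f).lower(a, b).compile()
analysis = compiled.cost_analysis()
# Older JAX versions return a one-element list of dicts instead of a dict.
if isinstance(analysis, (list, tuple)):
    analysis = analysis[0]
print(analysis.get("flops"))  # ideally 2*128*256*64; the issue reports -1.0
```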
1.3 Top 5 Quiet Issues:
We consider quiet issues to be the issues that have remained open in this project for the longest time. The team should work together to get these issues resolved and closed as soon as possible.
- Profiling crashes when run using JAX: This issue describes a problem where setting the `xla_hlo_profile` flag causes JAX programs to crash when running on a CPU, resulting in a segmentation fault. The provided code snippet and output indicate that the crash occurs specifically on line 4, when attempting to execute the program (a sketch of enabling the flag appears after this list).
- Open for 362 days, 20 hours, 13 minutes
- [XLA:GPU] Run HLO verifier more often: This issue highlights the need to run the HLO verifier more frequently within the XLA/GPU optimization pipeline. Specifically, it suggests running the verifier at least once more on the final optimized HLO and potentially during intermediate steps, provided it does not significantly impact compile time.
- Open for 357 days, 1 hour, 39 minutes
- hlo_live_range reports assertion failure when running an HLO module file through run_hlo_module or multihost_hlo_runner/hlo_runner_main: This issue reports an assertion failure in the `hlo_live_range` utility when running an HLO module file through the `run_hlo_module` or `multihost_hlo_runner/hlo_runner_main` tools. The error message indicates a failed check within the `hlo_live_range.cc` file, specifically related to asynchronous instruction handling.
- Open for 349 days, 3 hours, 34 minutes
- [Q] List of High Level Operations?: This issue is a query about the availability of a comprehensive document listing all high-level operations in XLA, as the existing resources appear outdated or incomplete. The user suggests that the file `xla_builder.h` might be the most accurate source but notes the difficulty of extracting a unique list of operations from it.
- Open for 329 days, 23 hours, 26 minutes
- Reducing copy operators inserted by copy-insertion pass: This issue addresses the significant runtime and memory costs introduced by the copy-insertion pass in some networks, which can be optimized to reduce the number of copies. The proposed solutions include adding control dependencies between operators to ensure correct scheduling and introducing a new optimization pass to remove unnecessary copies after the scheduling pass.
- Open for 324 days, 18 hours, 27 minutes
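For the profiling crash at the top of this list, a minimal sketch of how such a flag is typically enabled from JAX: XLA debug options are passed through the `XLA_FLAGS` environment variable before the backend initializes. The tiny program is illustrative, not the reporter's snippet.

```python
import os

# XLA debug options are read from XLA_FLAGS at backend initialization,
# so set the flag before importing jax.
os.environ["XLA_FLAGS"] = "--xla_hlo_profile"

import jax
import jax.numpy as jnp

@jax.jit
def f(x):
    return jnp.sin(x) * 2.0

# The issue reports that executing the jitted program with this flag set
# segfaults on CPU.
print(f(jnp.arange(4.0)))
```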
1.4 Closed Issues
Closed Issues This Week: 2
Average Issue Close Time (This Week): 51.49 days
Summarized Issues:
- Resharding Costs in XLA Auto-Sharding: This issue involves a question about the computation of resharding costs for the `reshape` operation in the XLA auto-sharding code, specifically why the communication cost is zero despite an `all_gather` operation being required. The discussion aims to clarify the cost computation and its implications for performance.
- All-Gather Combiner Pass in XLA: This issue addresses the all-gather combiner pass in the XLA project coalescing different data types, a behavior not supported by all hardware that can therefore cause compatibility problems. The suggestion is to refactor this behavior into a configurable option to enhance flexibility and hardware compatibility.
1.5 Issue Discussion Insights
This section analyzes the tone and sentiment of discussions in this project's open issues over the past week to identify potentially heated exchanges and to help maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open issues from the past week.
II. Pull Requests
2.1 Open Pull Requests
Open Pull Requests This Week: 9
Pull Requests:
- ROCm integration and enhancements: Multiple pull requests focus on integrating and enhancing ROCm support within the project. One pull request introduces changes to integrate ROCm libraries for Spack, while another resolves an issue related to the Softmax function in the ROCm backend. These efforts collectively aim to improve the compatibility and performance of the project on ROCm platforms.
- Error handling and detection: A pull request introduces a mechanism to detect NCCL timeout errors and return appropriate error statuses for asynchronous events. This enhancement prevents program crashes and enables proper Python exception handling. It ensures more robust and reliable error management in the project.
- Compile method enhancements: Enhancements to the PjRtStreamExecutorClient::Compile method are introduced to ensure that argument and result layouts are propagated from MLIR code to the XLA compile options. This pull request also preserves these layouts during SPMD canonicalization after resharding. These changes aim to improve the accuracy and efficiency of the compilation process.
- Sharding and autotuning: A pull request aims to enable the sharding of autotuning by default in the project. This change is contingent upon another pull request by PatriosTheGreat. The goal is to enhance the performance and efficiency of the autotuning process.
- Unique channel IDs: A new pass is introduced to enforce unique channel IDs after various transformations and optimizations in XLA. This pull request addresses issue #14600. Ensuring unique channel IDs helps maintain the integrity and correctness of the transformations.
- Gloo support for MacOS: A pull request aims to introduce Gloo support for MacOS by utilizing libuv as the transport mechanism. This provides an alternative to a previous pull request and enhances the project's compatibility with MacOS. The change aims to broaden the platform support for the project.
- Performance optimization in Triton kernel emission: A pull request aims to enhance performance by skipping the final part of the Triton kernel emission during deserialization from the cache. This optimization can potentially reduce deserialization time by 0.5-1 seconds for larger Pallas kernels. The goal is to improve the overall efficiency of the deserialization process.
- Data transfer optimization in collective operations: A new pass is introduced to optimize data transfer in collective operations such as all-gather, all-to-all, collective-broadcast, and collective-permute. This optimization incorporates subsequent quantization or conversion to a narrower data type. The aim is to enhance the efficiency and performance of collective operations.
2.2 Closed Pull Requests
Closed Pull Requests This Week: 24
Summarized Pull Requests:
- SYCL platform integration: This topic covers the integration of the SYCL platform into the XLA:GPU component. The pull requests aim to register a Python callback on the SYCL platform and integrate SYCL as a subset of broader changes. These changes are essential for expanding the compatibility and functionality of the XLA:GPU component.
- AMD GPU support: This topic focuses on enabling specific features and optimizations for AMD GPUs. The pull requests aim to enable dot algorithms and the Triton feature in XLA for ROCm. These changes are crucial for improving performance and compatibility with AMD hardware.
- Future hardware compatibility: This topic addresses modifications needed for future hardware compatibility. The pull requests ensure that the codebase remains compatible with upcoming hardware changes. These updates are necessary to maintain the project's relevance and functionality with new hardware.
- Low-spec GPU handling: This topic deals with handling low-spec GPUs by skipping tests that require large shared memory. The pull request aims to improve the testing process for GPUs with limited resources. This change helps in avoiding unnecessary test failures on low-spec hardware.
- All-gather operation control: This topic introduces an option to prevent the combination of all-gather operations with different data types. The pull request allows users to set `combine_different_dtypes=false` while maintaining the existing behavior by default. This change provides more control over data type handling in all-gather operations.
- Transpose operation optimization: This topic focuses on optimizing the performance of the transpose operation in the MLIR emitter. The pull request adjusts the block size based on the register count to reduce high register pressure. This optimization improves execution efficiency.
- Deterministic iteration order: This topic addresses the need for a deterministic iteration order in the `StableHashMap` implementation. The pull request replaces `::absl::flat_hash_map` with `::absl::btree_map` and includes new tests to validate the change. This update ensures a consistent iteration order.
- Deterministic fingerprint generation: This topic introduces methods to generate a deterministic fingerprint for backend configurations by encoding them as deterministically serialized binary strings. This change addresses the non-deterministic ordering of proto maps (a general sketch of the technique appears after this list).
- Testing replicated HLO modules: This topic adds functions for testing replicated HLO modules. The pull request introduces `RunAndCompareTwoModulesReplicated` and `CompareInputs` to `HloTestBase`. These functions enhance the testing capabilities for replicated modules.
- Call graph flattening: This topic addresses the need to flatten the call graph after the collective pipeliner pass. The pull request ensures call sites are unique by relocating the collective pipeliner pass. This change is crucial for maintaining unique call sites.
- GPU device information test: This topic reintroduces the GPU device information test. The pull request corrects the associated specification files. This update ensures accurate testing of GPU device information.
- SPMD configuration for gather/scatter: This topic introduces a new SPMD configuration option for gather/scatter operations. The pull request specifies a zero-cost method and includes formatting fixes. This change maintains the existing IndexParallel strategy by default.
- Debug flag for HLO dumping: This topic introduces a debug flag for controlling HLO dumping and NVTX marker naming. The pull request addresses inconsistencies and provides flexibility for debugging. This update is useful for debugging scenarios.
- DRAM size removal: This topic removes the DRAM size from the model string in the device description for GPUs. The pull request addresses the variability in DRAM size for the same GPU model. This change ensures accurate model identification.
- Cross memory pool access: This topic addresses enabling cross-memory-pool access for the `cuda_mallocasync` allocator. The pull request prevents memory faults for intra-node NCCL operators. This change is crucial for memory management in XLA:GPU.
- OSS build breakage fix: This topic addresses the OSS build breakage of `se_gpu_pjrt_client_test.cc`. The pull request adds a `TF_` prefix to `ASSERT_OK` in the tests. This update resolves the build break issue.
- ROCm build break fix: This topic addresses a build break issue for the ROCm platform. The pull request resolves the issue introduced in commit 2cc8aba. This change ensures successful builds on the ROCm platform.
- GEMM algorithm picker refactor: This topic involves refactoring `gemm_algorithm_picker/test`. The pull request integrates precision settings and simplifies the associated test. This update improves the test's efficiency and readability.
- Buffer comparator refactor: This topic involves refactoring the `buffer_comparator`. The pull request adds parameters to improve code readability and addresses previous review issues. This change enhances the code's maintainability.
- ROCm environment fix: This topic addresses an issue related to the undefined `__oclc_ABI_version` symbol in the ROCm environment. The pull request resolves the issue as indicated by its title. This change ensures compatibility with the ROCm environment.
- BinaryMap cleanup: This topic involves cleaning up the definition of BinaryMap in the GPU component. The pull request addresses the cleanup as indicated by its title. This update improves the code's clarity and organization.
- Unique channel IDs: This topic introduces a new pass to enforce unique channel IDs. The pull request ensures uniqueness after various transformations and optimizations. This change is essential for maintaining unique channel IDs in XLA.
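As a general illustration of the determinism technique behind the iteration-order and fingerprint entries above (sketched here in Python, not the pull requests' C++): hashing a configuration's entries in sorted key order yields the same digest regardless of insertion or map-iteration order.

```python
import hashlib

def fingerprint(config: dict) -> str:
    # Serialize entries in sorted key order so the digest does not depend on
    # the map's (potentially non-deterministic) iteration order.
    payload = ";".join(f"{k}={config[k]}" for k in sorted(config))
    return hashlib.sha256(payload.encode()).hexdigest()

a = {"dot_algorithm": "tf32", "split_k": 4}
b = {"split_k": 4, "dot_algorithm": "tf32"}  # same entries, different order
assert fingerprint(a) == fingerprint(b)
print(fingerprint(a)[:16])
```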
2.3 Pull Request Discussion Insights
This section analyzes the tone and sentiment of discussions in this project's open pull requests over the past week to identify potentially heated exchanges and to help maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open pull requests from the past week.
III. Commits
3.1 Commits
Commits This Week: 184
Summarized Commits:
- Code Refactoring and Cleanup: Several commits focus on refactoring and cleaning up the codebase to improve readability, maintainability, and performance. This includes removing unused methods, simplifying functions, and updating comments for clarity (e.g., commits 9, 16, 26, 57, 75, 131, 135, 139).
- Bug Fixes and Issue Resolutions: Numerous commits address specific bugs and issues within the project, such as fixing memory sanitizer errors, addressing performance regressions, and resolving test failures (e.g., commits 5, 21, 23, 41, 52, 72, 77, 79, 152).
- Reverting Changes: Some commits involve reverting previous changes that introduced issues or were deemed unnecessary, ensuring the stability and correctness of the codebase (e.g., commits 3, 4, 22, 56, 71, 176).
- Performance Improvements: Various commits aim to optimize performance, such as caching frequently accessed data, optimizing indexing maps, and improving the efficiency of specific operations (e.g., commits 8, 36, 109, 138, 154).
- Feature Enhancements: Several commits introduce new features or enhance existing ones, such as adding support for stateful FFI handlers, enabling new MLIR emitters, and improving error messages (e.g., commits 7, 12, 17, 48, 94, 136).
- Test and Benchmark Updates: Many commits focus on updating, adding, or splitting tests and benchmarks to ensure comprehensive coverage and maintainability (e.g., commits 6, 24, 29, 62, 64, 181).
- Code Organization: Some commits involve reorganizing code to improve structure and manageability, such as relocating files and separating components into distinct directories (e.g., commits 2, 15, 35, 76).
- LLVM and Triton Integrations: Several commits integrate updates from the LLVM and Triton projects, ensuring compatibility and leveraging new features (e.g., commits 11, 40, 49, 160).
- Configuration and Build System Updates: Various commits update configuration settings and build systems to streamline development and ensure compatibility across different environments (e.g., commits 27, 69, 87, 120).
- Error Handling and Debugging: Some commits enhance error handling and debugging capabilities, such as improving error messages and adding debug flags (e.g., commits 13, 19, 119).
- GPU and TPU Specific Changes: Several commits focus on improvements and fixes specific to GPU and TPU backends, such as optimizing memory space assignments and enhancing GPU dialects (e.g., commits 14, 18, 33, 81).
- SPMD and Sharding Enhancements: Various commits enhance the SPMD (Single Program Multiple Data) framework and sharding capabilities, improving performance and flexibility (e.g., commits 54, 88, 128, 140).
- Foreign Function Interface (FFI) Enhancements: Several commits introduce and enhance support for FFI handlers, enabling more flexible and powerful integrations (e.g., commits 95, 96, 97).
- Automated Code Changes: Numerous commits involve automated code changes to maintain consistency and adhere to coding standards (e.g., commits 50, 86, 123, 161).
- Memory Management Improvements: Some commits focus on improving memory management, such as enabling asynchronous memory copies and optimizing memory space usage (e.g., commits 65, 66, 100).
- Index and Shape Handling: Various commits enhance the handling of indices and shapes, ensuring correctness and improving performance (e.g., commits 92, 93, 147).
- Error and Warning Fixes: Several commits address specific errors and warnings, ensuring compatibility and preventing potential issues (e.g., commits 116, 118, 156).
- Compatibility Updates: Some commits ensure compatibility with different platforms and environments, such as addressing MacOS-specific issues and updating dependencies (e.g., commits 87, 114, 167).
- New Functionalities: Several commits introduce new functionalities, such as new sorting implementations and support for new operations (e.g., commits 31, 60, 183).
- Documentation and Comment Updates: Various commits update documentation and comments to improve clarity and provide better guidance for developers (e.g., commits 59, 122).
- Testing Framework Enhancements: Some commits enhance the testing framework, adding new test utilities and improving existing ones (e.g., commits 83, 151).
- Shardy Component Updates: Several commits update the Shardy component, ensuring it remains up-to-date and functional (e.g., commits 70, 89, 113, 129, 146).
- Integration and Synchronization: Various commits focus on integrating changes from different branches and ensuring synchronization across components (e.g., commits 45, 121, 178).
- Idempotency and Consistency: Some commits ensure idempotency and consistency in method behaviors, improving reliability and predictability (e.g., commits 63, 64, 142).
IV. Contributors
4.1 Contributors
Active Contributors:
We consider an active contributor in this project to be any contributor who has made at least 1 commit, opened at least 1 issue, or created at least 1 pull request in the past month.
Contributor | Commits | Pull Requests | Issues |
---|---|---|---|
A. Unique TensorFlower | 169 | 0 | 0 |
Eugene Zhulenev | 91 | 0 | 0 |
Johannes Reifferscheid | 50 | 0 | 0 |
David Dunleavy | 40 | 0 | 0 |
Adrian Kuegel | 37 | 0 | 0 |
Benjamin Chetioui | 36 | 0 | 0 |
Kyle Lucke | 35 | 0 | 0 |
Oleg Shyshkov | 22 | 0 | 0 |
apivovarov | 0 | 14 | 3 |
Alexander Belyaev | 15 | 0 | 0 |
Alexander Pivovarov | 15 | 0 | 0 |
Dimitar (Mitko) Asenov | 14 | 0 | 0 |
Henning Becker | 14 | 0 | 0 |
Adam Banaś | 12 | 0 | 0 |
Benjamin Kramer | 11 | 0 | 0 |
jaro-sevcik | 0 | 11 | 0 |
sergey-kozub | 0 | 11 | 0 |
Sergey Kozub | 10 | 0 | 0 |
Kuy Mainwaring | 10 | 0 | 0 |
George Karpenkov | 10 | 0 | 0 |
Peter Hawkins | 9 | 0 | 0 |
Tongfei Guo | 9 | 0 | 0 |
Junwhan Ahn | 9 | 0 | 0 |
zoranjovanovic-ns | 4 | 5 | 0 |
shawnwang18 | 0 | 9 | 0 |
Goran Flegar | 8 | 0 | 0 |
mmakevic-amd | 4 | 4 | 0 |
sergachev | 0 | 8 | 0 |
Victor Stone | 7 | 0 | 0 |
Paweł Paruzel | 7 | 0 | 0 |
Tori Baker | 7 | 0 | 0 |
Ilia Sergachev | 7 | 0 | 0 |
Sandeep Dasgupta | 7 | 0 | 0 |
Jaroslav Sevcik | 7 | 0 | 0 |
Christian Sigg | 7 | 0 | 0 |
Alexander Lyashuk | 7 | 0 | 0 |
Bixia Zheng | 6 | 0 | 0 |
Shaogang Wang | 6 | 0 | 0 |
Zixuan Jiang | 6 | 0 | 0 |
David Majnemer | 6 | 0 | 0 |
Kevin Gleason | 6 | 0 | 0 |
Seher Ellis | 6 | 0 | 0 |
Hyeontaek Lim | 6 | 0 | 0 |
shraiysh | 0 | 6 | 0 |
pemeliya | 3 | 2 | 0 |
Shraiysh | 5 | 0 | 0 |
Greg Olechwierowicz | 5 | 0 | 0 |
akhilgoe | 3 | 2 | 0 |
Penporn Koanantakool | 5 | 0 | 0 |
lingzhi98 | 2 | 3 | 0 |
Shawn Wang | 5 | 0 | 0 |
Ionel Gog | 5 | 0 | 0 |
hsharsha | 0 | 5 | 0 |
ptoulme-aws | 0 | 4 | 1 |
Tom Natan | 4 | 0 | 0 |
terryysun | 2 | 2 | 0 |
Sergei Lebedev | 4 | 0 | 0 |
Farzin Houshmand | 4 | 0 | 0 |
Harsha H S | 4 | 0 | 0 |
Chi Zeng | 4 | 0 | 0 |
Tixxx | 0 | 3 | 1 |
Mehrdad Khani | 3 | 0 | 0 |
Gregory Pataky | 3 | 0 | 0 |
Gunhyun Park | 3 | 0 | 0 |
Dirk Hornung | 3 | 0 | 0 |
pizzud | 3 | 0 | 0 |
Mohammed Anany | 3 | 0 | 0 |
philipphack | 0 | 3 | 0 |
i-chaochen | 0 | 3 | 0 |
huhuiqi7 | 0 | 0 | 3 |
TJ Xu | 2 | 0 | 0 |
Patrick Toulme | 2 | 0 | 0 |
Leo Heinsaar | 2 | 0 | 0 |
Olli Lupton | 2 | 0 | 0 |
Anlun Xu | 2 | 0 | 0 |
Dan Foreman-Mackey | 2 | 0 | 0 |
Ruturaj Vaidya | 2 | 0 | 0 |
Tamás Danyluk | 2 | 0 | 0 |
Changhui Lin | 2 | 0 | 0 |
Shanbin Ke | 2 | 0 | 0 |
lausannel | 0 | 2 | 0 |
qGentry | 0 | 0 | 2 |
FatJhon | 0 | 0 | 2 |
joaospinto | 0 | 0 | 2 |
Jorge Gorbe Moya | 1 | 0 | 0 |
Kuangyuan Chen | 1 | 0 | 0 |
Krasimir Georgiev | 1 | 0 | 0 |
Bart Chrzaszcz | 1 | 0 | 0 |
Sheng Yang | 1 | 0 | 0 |
Theotime Combes | 1 | 0 | 0 |
Matt Miecnikowski | 1 | 0 | 0 |
Amit Sabne | 1 | 0 | 0 |
Zhan Lu | 1 | 0 | 0 |
Emilio Cota | 1 | 0 | 0 |
Ryan M. Lefever | 1 | 0 | 0 |
Gleb Pobudzey | 1 | 0 | 0 |
Dragan Mladjenovic | 1 | 0 | 0 |
Chao | 1 | 0 | 0 |
Michael Hudgins | 1 | 0 | 0 |
Vadym Matsishevskyi | 1 | 0 | 0 |
Yimei Sun | 1 | 0 | 0 |
Kanvi Khanna | 1 | 0 | 0 |
Abhinav Gunjal | 1 | 0 | 0 |
Clive Verghese | 1 | 0 | 0 |
Aliia Khasanova | 1 | 0 | 0 |
Ce Zheng | 1 | 0 | 0 |
Tomás Longeri | 1 | 0 | 0 |
Frédéric Bastien | 1 | 0 | 0 |
Derek Murray | 1 | 0 | 0 |
Yash Katariya | 1 | 0 | 0 |
buptzyb | 1 | 0 | 0 |
Chunxiang (Jake) Zheng | 1 | 0 | 0 |
Saran Tunyasuvunakool | 1 | 0 | 0 |
Jieying Luo | 1 | 0 | 0 |
Shu Wang | 1 | 0 | 0 |
Philipp Hack | 1 | 0 | 0 |
wenchenvincent | 1 | 0 | 0 |
Blake Hechtman | 1 | 0 | 0 |
Phuong Nguyen | 1 | 0 | 0 |
Jan | 1 | 0 | 0 |
Parker Schuh | 1 | 0 | 0 |
wenscarl | 0 | 1 | 0 |
janpfeifer | 0 | 1 | 0 |
olupton | 0 | 1 | 0 |
Cjkkkk | 0 | 1 | 0 |
dinodeep | 0 | 1 | 0 |
dimvar | 0 | 1 | 0 |
hmonishN | 0 | 1 | 0 |
ShengYang1 | 0 | 1 | 0 |
heiner | 0 | 1 | 0 |
nouiz | 0 | 0 | 1 |
Prashant-Jagtap | 0 | 0 | 1 |
othakkar | 0 | 0 | 1 |
mars1248 | 0 | 0 | 1 |
joelberkeley | 0 | 0 | 1 |
AleksKnezevic | 0 | 0 | 1 |
kranipa | 0 | 0 | 1 |
LeoneChen | 0 | 0 | 1 |
yanminghui123 | 0 | 0 | 1 |
bhuntsman | 0 | 0 | 1 |
andportnoy | 0 | 0 | 1 |