Weekly GitHub Report for Llama.cpp - 2024-07-17 21:23:15
Thank you for subscribing to our weekly newsletter! Each week, we deliver a comprehensive summary of your GitHub project's latest activity right to your inbox, including an overview of your project's issues, pull requests, contributors, and commit activity.
I. Issues
1.1 Open Issues
Open Issues This Week: 36
Summarized Issues:
- Missing Library Dependencies on Android: Issues have been reported where binaries compiled using the Android NDK on Linux fail to execute on Android Termux due to missing library dependencies, specifically "libllama.so". This problem affects the execution of scripts and binaries, leading to errors and failed runs. Users have identified this missing library as a critical dependency for running the Llava model on Android devices.
- Model Training and Precision: Users have inquired about the possibility of training models from scratch using f16 or q8 precision instead of f32. This feature is not currently available but is a goal for future development. The potential benefits include speeding up the training process on CPUs.
- File Execution Errors: Several issues involve errors when executing files after a successful build, such as "No such file or directory" errors. These problems were often resolved by identifying renamed tools or missing files. Such errors can disrupt workflows and require users to troubleshoot build outputs.
- Vulkan Backend Issues: The Vulkan backend has encountered multiple issues, including failures on specific GPUs and platforms. Problems include validation errors, memory allocation failures, and build failures due to missing executables. These issues affect the performance and usability of the Vulkan backend on various hardware configurations.
- Compilation Problems on Windows ARM64: Compilation issues have been reported with the `ggml-aarch64.c` file on Windows ARM64 using MSVC. Specific `__asm__` directives fail to compile because the conditional checks around them do not exclude MSVC. A workaround involves modifying these conditionals to ensure successful compilation (a minimal guard sketch follows this list).
- Model Conversion and Import Errors: Users have faced issues when converting and importing models, such as tensor shape mismatches and missing configuration files. These errors prevent successful model conversion and integration into the llama.cpp project.
- Performance and Compatibility Requests: There are multiple feature requests aimed at improving performance and compatibility, such as removing the cublas library dependency, supporting new models, and enhancing performance on specific backends. These requests aim to optimize the build size, add support for new architectures, and improve model performance.
- Runtime and Execution Errors: Various runtime errors have been reported, including issues with model loading, tensor shape mismatches, and unexpected outputs. These errors can cause models to fail during execution or produce incorrect results, impacting the reliability of the llama.cpp project.
- Documentation and Usability Improvements: Users have requested improvements in documentation and usability, such as adding a `--silent` flag, providing detailed release notes, and ensuring necessary packages are included in the `requirements.txt` file. These improvements aim to enhance the user experience and streamline the setup process.
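For context on the Windows ARM64 item above: MSVC does not accept GCC-style `__asm__` blocks, so the fix is to keep inline-assembly paths behind a conditional that excludes MSVC. A minimal illustrative sketch of that kind of guard (the macro name is hypothetical; the actual conditionals in `ggml-aarch64.c` may differ):

```cpp
// Hypothetical guard: enable GCC/Clang inline assembly only when the compiler
// supports it; MSVC (_MSC_VER) falls through to the plain C implementation.
#if defined(__aarch64__) && !defined(_MSC_VER)
#define USE_INLINE_ASM 1
#else
#define USE_INLINE_ASM 0
#endif
```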
1.2 Top 5 Active Issues:
We consider active issues to be issues that have generated substantial discussion in their comments.
- server : improvements and maintenance: This issue is about improving and maintaining the server example in the GitHub project, which has grown in functionality but is currently unstable and missing important features. The issue aims to track these points and draw community attention to them, as some tasks are significant and require considerable effort to complete.
- The comments discuss various improvements and suggestions, including adding look-ahead decoding, contrastive search, speculative sampling, and function calling support. There are also discussions about refactoring the code, supporting multiple mount points for the OAI API, and handling KV cache overflow errors. Some comments suggest focusing on the server as the main deliverable, while others debate the implementation of chat templates and the use of Jinja2. The conversation also touches on the need for better error handling, prompt processing improvements, and the potential for using an async framework for the server.
- Number of comments: 108
- Support BitNet b1.58 ternary models: This issue is about implementing support for BitNet b1.58 ternary models in the llama.cpp project. The BitNet b1.58 models use ternary values (1, 0, -1) and are shown to have performance increases over fp16 models, but they need to be trained in this ternary mode from the start (see the quantization sketch after this list).
- The comments discuss the potential benefits and challenges of implementing BitNet, including the need for new quantization methods, the feasibility of training and inference, and the availability of pre-trained models. There is also a discussion about the practicality of using FPGAs for ternary operations and the potential for future hardware optimizations.
- Number of comments: 88
- Investigate gemma 2 generation quality: This issue is about investigating the quality of the Gemma 2 model generation in the llama.cpp project, with initial reports suggesting potential problems with the tokenizer and quantization. The discussion includes various tests and comparisons with other implementations, as well as suggestions for potential fixes and improvements.
- The comments discuss issues with hard-coded window sizes, tokenizer discrepancies, quantization errors, and potential fixes, including changes to the conversion code and tokenizer handling. There are also detailed reports of testing different configurations and their impact on model performance, with some users suggesting specific changes to improve results.
- Number of comments: 88
- Support for Phi-3 models: This issue is about adding support for Microsoft's newly released Phi-3 models, which come in three variants: mini, small, and medium. The request is to integrate these models into the project, ensuring compatibility and functionality.
- The comments discuss various aspects of integrating Phi-3 models, including initial success with some models, issues with long context variants, and specific errors encountered during conversion. There are also discussions about the technical details of the models, such as the new "longrope" technique, and ongoing efforts to resolve these issues, including contributions from the community and references to relevant research papers.
- Number of comments: 83
- Bug: QWEN2 quantization GGML_ASSERT: This issue involves a bug encountered when attempting to quantize the Qwen2 7B instruct model to IQ2_XS, resulting in a GGML_ASSERT error. The user also reports that the same error occurs when attempting to quantize to IQ2_S, and they are seeking assistance to debug the issue.
- The comments discuss various errors encountered during quantization, including NaN values in the imatrix, potential fixes involving precision adjustments, and the impact of different hardware and configurations. Users share their experiences, potential solutions, and workarounds, such as using flash attention or different quantization methods, to address the issues.
- Number of comments: 72
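For the BitNet b1.58 item above, the quantization rule from the BitNet b1.58 paper is easy to state: scale each weight by the mean absolute value of the tensor, then round and clip to {-1, 0, 1}. A minimal sketch of that rule (not llama.cpp's implementation, whose kernels and data layout would differ):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Absmean ternary quantization as described in the BitNet b1.58 paper:
// q[i] = clip(round(w[i] / mean(|w|)), -1, 1); dequantize as q[i] * scale.
std::vector<int8_t> quantize_ternary(const std::vector<float> & w, float & scale) {
    double sum_abs = 0.0;
    for (float x : w) sum_abs += std::fabs(x);
    scale = (float) (sum_abs / w.size());
    if (scale == 0.0f) scale = 1e-8f;  // guard against an all-zero tensor

    std::vector<int8_t> q(w.size());
    for (size_t i = 0; i < w.size(); ++i) {
        float r = std::round(w[i] / scale);
        q[i] = (int8_t) std::fmax(-1.0f, std::fmin(1.0f, r));  // clip to {-1, 0, 1}
    }
    return q;
}
```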
1.3 Top 5 Quiet Issues:
We consider quiet issues to be issues that have been opened in this project for the longest time. The team should work together to get these issues resolved and closed as soon as possible.
- llama : add test for saving/loading sessions to the CI: This issue is about adding a test for saving and loading sessions to the continuous integration (CI) process of the llama project. It suggests examining the `save-load-state` example and incorporating a simple test into the `ci/run.sh` script.
- Open for 336 days, 10 hours, 02 minutes
- llama : tool for evaluating quantization results per layer: This issue proposes the development of a tool to evaluate quantization results per layer by comparing the full-precision and quantized models using `ggml` exported graphs. The tool aims to provide detailed statistical information on intermediate results after each graph node to identify where precision is needed to minimize quantization differences (see the error-metric sketch after this list).
- Open for 327 days, 14 hours, 41 minutes
- CUDA non-determinism on identical requests: This issue describes a problem with CUDA non-determinism where identical requests to a server's completion API yield different responses on the first attempt, but consistent responses on subsequent attempts. The problem appears to be related to caching and is not observed when using Metal offload or with certain configurations of CUDA offload.
- Open for 325 days, 07 hours, 05 minutes
- Windows ROCm Build: This issue involves a user attempting to compile the llama.cpp project for ROCm on a Windows system, encountering difficulties because CMake's default paths for the Clang compiler do not match the Windows directory structure. The user reports that CMake is configured to use Unix-style paths for Clang, while on Windows the Clang executables are located in "C:\Program Files\AMD\ROCm\5.5\bin\", leading to errors when trying to compile with Visual Studio.
- Open for 325 days, 05 hours, 28 minutes
- Please support the also official Falcon-rw-1b and Falcon-rw-7b model variants: This issue requests support for the Falcon-RW-1B and Falcon-RW-7B model variants, which are official versions of the Falcon model series. The user has encountered errors when attempting to convert and quantize these models using the `convert-falcon-hf-to-gguf.py` script and is seeking assistance or confirmation on whether these models will be supported.
- Open for 323 days, 16 hours, 53 minutes
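On the per-layer quantization evaluation idea above: the heart of such a tool is a per-node error statistic between the full-precision and quantized graphs. One plausible metric, shown here as a sketch (the issue does not prescribe a specific formula):

```cpp
#include <cstddef>

// Normalized mean squared error between a reference (f32) activation and the
// same activation produced by the quantized graph; computed once per graph node.
double nmse(const float * ref, const float * test, size_t n) {
    double err = 0.0, ref_norm = 0.0;
    for (size_t i = 0; i < n; ++i) {
        const double d = (double) ref[i] - (double) test[i];
        err      += d * d;
        ref_norm += (double) ref[i] * (double) ref[i];
    }
    return ref_norm > 0.0 ? err / ref_norm : err;  // 0.0 means a perfect match
}
```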
1.4 Closed Issues
Closed Issues This Week: 54
Average Issue Close Time (This Week): 39.96 days
1.5 Issue Discussion Insights
This section analyzes the tone and sentiment of discussions within this project's open issues over the past week to identify potentially heated exchanges and to maintain a constructive project environment.
- Can't run the program
- Toxicity Score: 0.55 (Frustration, condescending response, critical turn, suggestion to close issue)
- This GitHub conversation begins with mike2003 expressing frustration over a technical issue, which is met with a helpful suggestion from another user. However, mike2003's follow-up indicates that the suggestion did not resolve the problem, leading to a slightly condescending response from another user, suggesting that the issue is likely a path problem. The conversation continues with another user sharing a similar experience and offering a potential solution, which mike2003 responds to with a detailed comparison of outputs. The tone shifts as another user questions mike2003's CPU capabilities, leading to a brief technical exchange. The conversation takes a more critical turn when a user points out that mike2003 did not follow the issue template and suggests closing the issue, emphasizing the need for respect and effort in requests for volunteer support.
II. Pull Requests
2.1 Open Pull Requests
Open Pull Requests This Week: 19
Pull Requests:
- Ukrainian Tokens Addition: This pull request adds Ukrainian tokens into the string in the files `convert_hf_to_gguf.py` and `convert_hf_to_gguf_update.py`. It aims to enhance the language support of the project by including Ukrainian tokens. This update ensures that the project can handle Ukrainian text more effectively.
- BF16 Support in Metal Component: This pull request aims to add BF16 support to the metal component of the project. It includes pending tasks for MoE and Flash Attention. The pull request also features a self-reported review complexity assessment.
- CLI Tool Improvements: This pull request addresses several issues related to the CLI tool for generating Vulkan shaders. It adds prototypes for function definitions, inverts the logic of the `--no-clean` option, and provides a new help prompt with clear instructions.
- Public Headers Installation: This pull request ensures that all public headers, including `ggml-cuda.h`, are installed in the ggml build. It addresses the need for consistent header installation regardless of build settings, such as when HIPBLAS is used.
- Fallback Type Change: This pull request proposes changing the fallback type from `IQ4_NL` to `Q4_0`. The change is due to the lack of an `IQ4_NL` implementation in some backends. This update aims to improve compatibility across different backends.
- Script Rewriting: This pull request involves rewriting the `pydantic_models_to_grammar_examples.py` script. The goal is to improve readability, configurability, and error handling. It also makes it easier to run individual tests and parse outputs.
- MMQ CUDA Code Deduplication: This pull request focuses on deduplicating the MMQ CUDA code and adding support for various i-quant formats. It excludes `iq1_m` and converts data to `q8_0` or `q8_1` formats for improved performance in certain tensor core kernels.
- AI Studio Documentation: This pull request adds AI Studio to the list of user interfaces in the documentation of the project. It aims to provide users with more options for interacting with the project. This update enhances the project's documentation.
- Function Call Tokens Display: This pull request addresses the issue of function call tokens not being displayed when using `llama-server` for internlm2. It references issue #8405. The update ensures that function call tokens are correctly displayed.
- Code Refactoring: This pull request involves refactoring the `llama` code by reorganizing various components and renaming functions. It prepares for upcoming API changes by moving implementations to new header and source files and updating the `Makefile` dependencies.
- Lite-Mistral-Instruct Chat Template: This pull request adds support for the Lite-Mistral-Instruct chat template. It is used by the OuteAI/Lite-Mistral-150M-v2-Instruct model and features a specific format for system, user, and assistant messages.
- Mamba Model Simplification: This pull request aims to simplify the Mamba model in the llama.cpp project. It introduces advanced batch splits, contiguous allocation of recurrent state slots, and various optimizations to the `ggml` operators and batch handling mechanisms.
- Maximum Layers Increase: This pull request aims to address issue #8528 by increasing the maximum number of layers in the llama project from 256 to 512. It ensures that the project can handle more complex models. This update is crucial for scalability.
- Windows Snapdragon X Support: This pull request addresses improvements for running the project on Windows with Snapdragon X. It adds documentation for building on Windows, especially for ARM, and fixes issues related to MSVC's lack of support for C inline assembly on ARM.
- README Update for CMake: This pull request updates the README.md file to include steps for running CMake. It indicates a self-reported low review complexity. This update aims to make the build process clearer for new users.
- FlashAttention Support: This pull request adds FlashAttention support for Gemma 2 on the CPU and CUDA backends. It introduces a new parameter to control the logit softcap, which scales the logits and applies a hyperbolic tangent (tanh) prior to the softmax operation, and includes template instances for specific head sizes (see the softcap sketch after this list).
- Chameleon Model Support: This pull request adds support for the Chameleon model, enabling text-to-text inference. It lays the groundwork for future implementations of image-to-text, text-to-image, and interleaved pipelines, while noting the need for potential changes to the CLI and internal architecture for full functionality.
- SSM Metal Kernels: This pull request aims to introduce SSM Metal kernels to the ggml project. It references the target issue #8526. This update is part of ongoing efforts to enhance the project's performance on Metal.
- Dot Product Calculation Fix: This pull request addresses an issue in the ggml library by ensuring that the dot product calculation for `iq4_nl` handles cases with an odd number of blocks correctly. It defaults to a pure C implementation for the last block and has been tested with AVX and AVX2, with potential implications for NEON and other architectures (a remainder-handling sketch also follows this list).
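Two of the items above lend themselves to short illustrations. First, the logit softcap added for Gemma 2: instead of hard-clipping, the logits are squashed smoothly into (-cap, +cap) with a scaled tanh before the softmax. A minimal scalar sketch of that operation (the cap value shown is the commonly cited Gemma 2 attention setting, not taken from this pull request):

```cpp
#include <cmath>

// Soft-capping: bounds a logit smoothly within (-softcap, +softcap).
// Applied element-wise to attention logits before the softmax.
float softcap_logit(float logit, float softcap /* e.g. 50.0f */) {
    return softcap * std::tanh(logit / softcap);
}
```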
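Second, the `iq4_nl` dot-product fix follows a common SIMD pattern: the vectorized loop consumes blocks in pairs, and an odd trailing block is handled by a plain C tail. A generic sketch of that shape (not the actual ggml kernel, which operates on quantized blocks rather than floats):

```cpp
#include <cstddef>

// Generic remainder-handling shape: a stand-in for a SIMD loop covers an even
// number of blocks, and a pure C tail covers the odd final block, if any.
float dot_blocks(const float * a, const float * b, size_t nblocks, size_t block_size) {
    float sum = 0.0f;
    const size_t even_elems = (nblocks - nblocks % 2) * block_size;

    // Main loop: in real code this would be AVX/AVX2/NEON over block pairs.
    for (size_t i = 0; i < even_elems; ++i) {
        sum += a[i] * b[i];
    }
    // Scalar tail for the leftover block, mirroring the reported fix.
    for (size_t i = even_elems; i < nblocks * block_size; ++i) {
        sum += a[i] * b[i];
    }
    return sum;
}
```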
2.2 Closed Pull Requests
Closed Pull Requests This Week: 46
Summarized Pull Requests:
- Ascend NPU backend: This pull request introduces the Ascend NPU backend, leveraging Huawei's CANN architecture, to the project. It aims to integrate support for Ascend NPU, enhancing the project's compatibility with Huawei hardware. The implementation includes necessary adjustments to ensure smooth operation with the new backend.
- SYCL backend fixes: This pull request addresses a fix for the `mul_mat_id` function specifically for the MOE (Mixture of Experts) in the SYCL backend of the project. It also resolves issues related to the `mul_mat_id` unit tests in the SYCL component. Additionally, it adds functionality for concatenation through dimensions 1 and 2 in the SYCL component.
- Tokenization updates: This pull request involves updating the `convert-hf-to-gguf-update.py` script to include Ukrainian tokens in its string processing. It also addresses the issue of pre-tokenizing non-special added tokens to ensure compatibility with different tokenizers and fixes the tokenization of HTML tags. Additionally, it introduces a `--no-parse-special` option to the tokenizer to simplify the explanation of how the `parse_special` setting impacts tokenization.
- Performance optimizations: This pull request focuses on optimizing the performance of CDNA on the ROCm backend, achieving up to a 10x increase in speed for specific models. It also updates the helper headers to utilize dp4a PTX intrinsics for the Nvidia backend, resulting in significant performance improvements (see the dp4a sketch after this list). Additionally, it optimizes the performance of the MMQ code for CUDA, refactors it to handle different quantization formats, and prepares the codebase for future integration with i-quants.
- Vulkan shaders and CMake integration: This pull request integrates CMake build targets for Vulkan shaders, ensuring `ggml-vulkan-shaders.hpp` is generated at build time. It also updates the relocatable CMake package to link against the new `ggml` library. Additionally, it fixes a missing preprocessor parameter in the Vulkan shaders.
- LoRA adapter support: This pull request refactors the LoRA adapter support by introducing a new API for hot-swapping LoRA adapters. It adds a `struct llama_lora_adapter` to manage loaded adapters and ensures proper support for the GGUF format. It also reintroduces the PEFT to GGUF conversion script.
- Warnings and compile issues: This pull request addresses the suppression of a 'noreturn' warning in the `no_device_code` function within `common.cuh`. It also addresses compile warnings by replacing instances of `sprintf` with `snprintf` in the examples. Additionally, it resolves warnings encountered during the Docker build process for the project.
- Backend and build options: This pull request adds NVPL BLAS support to the project by introducing `GGML_NVPL` as a build option in the makefile. It also introduces a `GGML_USE_SVE` macro to disable SVE by default, addressing issues with 256-bit operations on SVE 128-bit CPUs. Additionally, it addresses an issue on the master branch where the compile definitions `GGML_CUDA_FORCE_MMQ` and `GGML_CUDA_FORCE_CUBLAS` were not being correctly applied when using HIP.
- Script and model updates: This pull request addresses the issue of a filename change of the script from `convert-hf-to-gguf.py` to `convert_hf_to_gguf.py` in the `tools.sh` file. It also addresses an issue in the Gemma2 9B model by updating the `query_pre_attn_scalar` value from 224 to 256. Additionally, it improves the efficiency of the `convert_hf_to_gguf.py` script by utilizing the faster `.get_slice(name)` method from `safetensors`.
- Documentation and configuration: This pull request updates the README.md file to reflect the current output of the `llama-server --help` command. It also addresses the issue of broken links within the development documentation. Additionally, it updates the `flake.lock` file in the project using an automated GitHub Action.
- Miscellaneous fixes: This pull request addresses the issue of directly using `__annotations__` in the `examples/pydantic_models_to_grammar.py` script. It also updates the `clib.json` file to point to the original xxhash repository by Cyan4973. Additionally, it addresses a failure in the continuous integration process caused by a specific commit.
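For readers unfamiliar with dp4a, mentioned in the performance item above: the PTX instruction (exposed in CUDA as the `__dp4a` intrinsic) computes a dot product of four packed 8-bit lanes plus a 32-bit accumulator in a single operation. A plain C++ emulation of the signed variant's semantics:

```cpp
#include <cstdint>

// Emulates __dp4a(a, b, c): treat a and b as four packed signed 8-bit lanes,
// multiply lane-wise, and add the four products to the accumulator c.
int32_t dp4a_emulated(int32_t a, int32_t b, int32_t c) {
    for (int lane = 0; lane < 4; ++lane) {
        const int8_t ai = (int8_t) (a >> (8 * lane));  // extract lane byte
        const int8_t bi = (int8_t) (b >> (8 * lane));
        c += (int32_t) ai * (int32_t) bi;
    }
    return c;
}
```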
2.3 Pull Request Discussion Insights
This section analyzes the tone and sentiment of discussions within this project's open pull requests over the past week to identify potentially heated exchanges and to maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open pull requests from the past week.
III. Commits
3.1 Commits
Commits This Week: 42
Summarized Commits:
- Bug Fixes and Crash Resolutions: Several commits address and resolve various bugs and crashes, including issues with Fibonacci hashing, the `n_predict` parameter, and the `--help` option for the `export-lora` command. These fixes ensure the stability and reliability of the project (a Fibonacci-hashing sketch follows this list).
- Documentation Updates: Multiple commits focus on updating documentation, such as the README.md file to reflect current command outputs, correcting broken links in development documentation, and removing outdated information. These updates help maintain accurate and useful documentation for developers and users.
- Performance Improvements: Commits aimed at improving performance include making the `--dry-run` option faster in the `convert_hf` tool, optimizing CUDA MMQ with explicit q8_1 memory layouts, and addressing memory leaks in the lazy MoE conversion process. These changes enhance the efficiency and speed of the project.
- Code Refactoring and Cleanup: Several commits involve refactoring and cleaning up the codebase, such as vertically aligning elements, removing unused file types, and converting kernels to use templates. These efforts improve code readability and maintainability.
- Continuous Integration and Build Process: Commits addressing the CI process and build warnings include fixing CI issues, adding a macro guard to suppress warnings on Windows, and updating the Docker setup. These changes ensure a smoother and more reliable build and integration process.
- New Features and Enhancements: New features introduced include the Ascend NPU backend, the `--no-cont-batching` argument, and the `--no-parse-special` option for tokenization. These additions expand the project's capabilities and provide more options for users.
- Library and Dependency Updates: Updates to libraries and dependencies include pointing `clib.json` to the original xxHash repository, updating the pydantic library for better compatibility, and changing the input 'nixpkgs' in the `flake.lock` file. These updates ensure the project uses the latest and most compatible versions of its dependencies.
- Tokenization and Pre-tokenization Improvements: Commits related to tokenization include handling non-special user-defined tokens, adjusting for various tokenizers, and resolving regex errors. These improvements enhance the accuracy and flexibility of the tokenization process.
- Backend and Hardware Support: Enhancements to backend and hardware support include adding NVPL BLAS support, integrating Vulkan into the CMake build system, and adding support for Sycl and OpenMP. These changes broaden the range of supported hardware and improve performance on different platforms.
- Error Handling and Warnings: Commits addressing error handling and warnings include printing an error message on empty input, suppressing warnings related to the 'noreturn' attribute, and updating the `_try_copy` lambda function to suppress a warning on Windows. These changes improve the robustness and clarity of the code.
- Configuration and Build System Updates: Updates to the configuration and build system include adding missing force for MMQ/cuBLAS for HIP, updating the .gitignore file, and fixing build errors in the Vulkan operation result checker. These changes streamline the build process and ensure proper configuration.
- Token and Parameter Adjustments: Commits adjusting tokens and parameters include removing the `fsep` token from `GPTRefactForCausalLM`, updating the `query_pre_attn_scalar` value for the llama model, and de-duplicating deepseek2 normalization. These adjustments ensure correct and efficient parameter usage.
- Server and API Enhancements: Enhancements to the server and API include handling content arrays in the chat API, ensuring consistent server batches, and updating examples to improve code safety. These changes improve the functionality and reliability of the server and API.
- Matrix Multiplication and SYCL Unit Tests: Commits related to matrix multiplication and SYCL unit tests include fixing parts of the `mul_mat_id` function and skipping tests related to bfloat16. These changes ensure accurate and efficient matrix operations and testing.
- Attention Mechanism and Precision Updates: Updates to attention mechanisms and precision include using F32 precision in Qwen2 attention and removing the FA component. These changes improve the accuracy and performance of attention mechanisms.
- Hashing and UUID Updates: Commits related to hashing and UUIDs include adding SHA-256 hashing to the `gguf_hash.py` script and renaming a string from "UUIDv5" to "uuid". These updates enhance security and clarity in hashing and UUID usage.
- Logging and Debugging Enhancements: Enhancements to logging and debugging include adding logging for the Ascend NPU backend and making certain headers private. These changes improve the ability to monitor and debug the project.
- Macro and Directive Updates: Updates to macros and directives include replacing the `<BLASLIB>_ENABLE_CBLAS` directive with `GGML_BLAS_USE_<BLASLIB>` and updating the `__trap` macro to reduce warnings. These changes ensure proper usage and reduce compilation warnings.
- File and Directory Management: Commits related to file and directory management include correcting the filename for the `convert-hf-to-gguf.py` script in `tools.sh` and excluding deprecated binary files from the repository. These changes ensure proper file management and organization.
- Compatibility and Support Fixes: Fixes for compatibility and support include addressing issues with Python versions 3.9 and 3.10 and ensuring compatibility with selected models in the LoRA adapter support. These changes ensure the project works smoothly across different environments and configurations.
- Normalization and Sampling Adjustments: Adjustments to normalization and sampling include removing duplicate entries in deepseek2 normalization and addressing the handling of non-embedding batches for sampled tokens. These changes improve the accuracy and efficiency of normalization and sampling processes.
- Conversion and Export Tools: Improvements to conversion and export tools include fixing memory leaks in the lazy MoE conversion process and ensuring the `--help` option for the `export-lora` command works correctly. These changes enhance the functionality and reliability of conversion and export tools.
- Template and Kernel Updates: Updates to templates and kernels include converting some kernels to use templates and making minor naming changes in the ggml project. These changes improve code consistency and maintainability.
- Special Token Handling: Improvements to special token handling include introducing the `--no-parse-special` option and addressing issues with non-special user-defined tokens. These changes enhance the flexibility and accuracy of token handling.
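On the Fibonacci-hashing commit mentioned at the top of this list: Fibonacci hashing multiplies a key by 2^64 divided by the golden ratio and keeps the top bits, which spreads sequential keys evenly across buckets. A minimal sketch of the classic technique (illustrative only; the constant's exact use in llama.cpp's commit may differ):

```cpp
#include <cstdint>

// Classic Fibonacci hashing: multiply by floor(2^64 / phi) and keep the top
// `bits` bits (bits must be in [1, 63]). The golden-ratio constant scatters
// consecutive keys uniformly across 2^bits buckets.
uint64_t fibonacci_hash(uint64_t key, unsigned bits) {
    const uint64_t GOLDEN = 11400714819323198485ull;  // ~ 2^64 / 1.6180339...
    return (key * GOLDEN) >> (64 - bits);
}
```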
IV. Contributors
4.1 Contributors
Active Contributors:
We consider an active contributor in this project to be any contributor who has made at least 1 commit, opened at least 1 issue, or created at least 1 pull request in the past month.
Contributor | Commits | Pull Requests | Issues |
---|---|---|---|
Georgi Gerganov | 40 | 0 | 0 |
ggerganov | 0 | 34 | 1 |
slaren | 12 | 14 | 0 |
ngxson | 0 | 16 | 3 |
JohannesGaessler | 0 | 17 | 0 |
Xuan Son Nguyen | 16 | 0 | 0 |
Johannes Gäßler | 15 | 0 | 0 |
compilade | 7 | 8 | 0 |
0wwafa | 0 | 0 | 14 |
danbev | 0 | 13 | 0 |
Daniel Bevenius | 12 | 0 | 0 |
fairydreaming | 5 | 5 | 0 |
Clint Herron | 8 | 0 | 0 |
luoyu-intel | 4 | 4 | 0 |
ditsuke | 8 | 0 | 0 |
jukofyork | 2 | 4 | 1 |
HanClinto | 0 | 7 | 0 |
AidanBeltonS | 3 | 3 | 0 |
RunningLeon | 1 | 2 | 3 |
Olivier Chafik | 6 | 0 | 0 |
Brian | 5 | 0 | 0 |
HatsuneMikuUwU33 | 2 | 2 | 1 |
mofosyne | 0 | 5 | 0 |
ochafik | 0 | 5 | 0 |
standby24x7 | 2 | 2 | 0 |
jaime-m-p | 2 | 2 | 0 |
Sigbjørn Skjæret | 4 | 0 | 0 |
OuadiElfarouki | 0 | 4 | 0 |
criminact | 0 | 2 | 2 |
Alcpz | 0 | 4 | 0 |
iboB | 0 | 4 | 0 |
oldmanjk | 0 | 0 | 4 |
Meng, Hengyu | 3 | 0 | 0 |
0cc4m | 2 | 1 | 0 |
bandoti | 1 | 1 | 1 |
Douglas Hanley | 3 | 0 | 0 |
Borislav Stanimirov | 3 | 0 | 0 |
Alberto Cabrera Pérez | 3 | 0 | 0 |
b4b4o | 1 | 1 | 1 |
sasha0552 | 1 | 2 | 0 |
mdegans | 0 | 2 | 1 |
CISC | 0 | 3 | 0 |
airMeng | 0 | 3 | 0 |
maruel | 0 | 2 | 1 |
AndreasKunar | 0 | 1 | 2 |
RakshitAralimatti | 0 | 0 | 3 |
ghchris2021 | 0 | 0 | 3 |
sorasoras | 0 | 0 | 3 |
yli147 | 0 | 0 | 3 |
NikolaiLyssogor | 1 | 1 | 0 |
laik | 1 | 1 | 0 |
daghanerdonmez | 1 | 1 | 0 |
Kevin Wang | 2 | 0 | 0 |
toyer | 2 | 0 | 0 |
Daniele | 2 | 0 | 0 |
MistApproach | 1 | 1 | 0 |
iacore | 1 | 1 | 0 |
zhentaoyu | 1 | 1 | 0 |
pculliton | 1 | 1 | 0 |
loonerin | 1 | 1 | 0 |
Raj Hammeer Singh Hada | 2 | 0 | 0 |
kustaaya | 1 | 1 | 0 |
Eddie-Wang | 2 | 0 | 0 |
joecryptotoo | 1 | 1 | 0 |
0xspringtime | 1 | 1 | 0 |
ddh0 | 1 | 1 | 0 |
Hamdoud Hakem | 2 | 0 | 0 |
Michael de Gans | 2 | 0 | 0 |
Adriankhl | 0 | 1 | 1 |
youth123 | 0 | 2 | 0 |
matteoserva | 0 | 1 | 1 |
hamdoudhakem | 0 | 2 | 0 |
AragonerUA | 0 | 2 | 0 |
daniandtheweb | 0 | 2 | 0 |
iamlemec | 0 | 2 | 0 |
isaac-mcfadyen | 0 | 1 | 1 |
ZeusXuan | 0 | 2 | 0 |
jpodivin | 0 | 2 | 0 |
LDLINGLINGLING | 0 | 1 | 1 |
dspasyuk | 0 | 1 | 1 |
kevmo314 | 0 | 2 | 0 |
nicholaiTukanov | 0 | 1 | 1 |
msy-kato | 0 | 2 | 0 |
AmgadHasan | 0 | 1 | 1 |
amochkin | 0 | 1 | 1 |
liuda1980 | 0 | 0 | 2 |
uwu-420 | 0 | 0 | 2 |
cmp-nct | 0 | 0 | 2 |
Billzhong2022 | 0 | 0 | 2 |
takosalad | 0 | 0 | 2 |
kidoln | 0 | 0 | 2 |
Smupk2778 | 0 | 0 | 2 |
wangzi7654321 | 0 | 0 | 2 |
jygmysoul | 0 | 0 | 2 |
QIANXUNZDL123 | 0 | 0 | 2 |
ch1y0q | 0 | 0 | 2 |
SimplyCorbett | 0 | 0 | 2 |
Battlehub0x | 0 | 0 | 2 |
MathiasSchindler | 0 | 0 | 2 |
Sokartecnologi | 0 | 0 | 2 |
Al Mochkin | 1 | 0 | 0 |
hipudding | 1 | 0 | 0 |
Masaya, Kato | 1 | 0 | 0 |
Steve Bonds | 1 | 0 | 0 |
M-A | 1 | 0 | 0 |
Armen Kaleshian | 1 | 0 | 0 |
Jiří Podivín | 1 | 0 | 0 |
Chen Xi | 1 | 0 | 0 |
Nicholai Tukanov | 1 | 0 | 0 |
Dibakar Gope | 1 | 0 | 0 |
M. Yusuf Sarıgöz | 1 | 0 | 0 |
Andy Salerno | 1 | 0 | 0 |
John Balis | 1 | 0 | 0 |
Denis Spasyuk | 1 | 0 | 0 |
Alex Tuddenham | 1 | 0 | 0 |
Andy Tai | 1 | 0 | 0 |
Bjarke Viksøe | 1 | 0 | 0 |
Derrick T. Woolworth | 1 | 0 | 0 |
Natsu | 1 | 0 | 0 |
Ouadie EL FAROUKI | 1 | 0 | 0 |
Pieter Ouwerkerk | 1 | 0 | 0 |
Neo Zhang Jianyu | 1 | 0 | 0 |
Icecream95 | 1 | 0 | 0 |
Judd | 1 | 0 | 0 |
Faisal Zaghloul | 1 | 0 | 0 |
Mateusz Charytoniuk | 1 | 0 | 0 |
Roni | 1 | 0 | 0 |
Michael Francis | 1 | 0 | 0 |
Andrei | 1 | 0 | 0 |
Isaac McFadyen | 1 | 0 | 0 |
HanishKVC | 1 | 0 | 0 |
Christian Zhou-Zheng | 1 | 0 | 0 |
Yann Follet | 1 | 0 | 0 |
Aarni Koskela | 1 | 0 | 0 |
k.h.lai | 1 | 0 | 0 |
Eve | 1 | 0 | 0 |
Shuichi Tsutsumi | 1 | 0 | 0 |
jojorne | 1 | 0 | 0 |
Ulrich Drepper | 1 | 0 | 0 |
Frank Mai | 1 | 0 | 0 |
Abheek Gulati | 1 | 0 | 0 |
thxCode | 0 | 1 | 0 |
drepper | 0 | 1 | 0 |
Galunid | 0 | 1 | 0 |
abhishek-rn | 0 | 1 | 0 |
arthw | 0 | 1 | 0 |
rgerganov | 0 | 1 | 0 |
edude03 | 0 | 1 | 0 |
NickCrews | 0 | 1 | 0 |
netrunnereve | 0 | 1 | 0 |
ltoniazzi | 0 | 1 | 0 |
IMbackK | 0 | 1 | 0 |
Eddie-Wang1120 | 0 | 1 | 0 |
fmz | 0 | 1 | 0 |
katsu560 | 0 | 1 | 0 |
contentis | 0 | 1 | 0 |
joeatodd | 0 | 1 | 0 |
salaxieb | 0 | 1 | 0 |
mgroeber9110 | 0 | 1 | 0 |
abetlen | 0 | 1 | 0 |
AlexsCode | 0 | 1 | 0 |
Zor-X-L | 0 | 1 | 0 |
crashr | 0 | 1 | 0 |
hackingthekernel | 0 | 1 | 0 |
andy-tai | 0 | 1 | 0 |
mcharytoniuk | 0 | 1 | 0 |
Quantaindew | 0 | 1 | 0 |
foldl | 0 | 1 | 0 |
ho2103 | 0 | 1 | 0 |
hopto-dot | 0 | 1 | 0 |
akemimadoka | 0 | 1 | 0 |
NeoZhangJianyu | 0 | 1 | 0 |
dwoolworth | 0 | 1 | 0 |
pouwerkerk | 0 | 1 | 0 |
bviksoe | 0 | 1 | 0 |
mtasic85 | 0 | 1 | 0 |
diimdeep | 0 | 1 | 0 |
perpendicularai | 0 | 1 | 0 |
prfd | 0 | 1 | 0 |
brochure | 0 | 1 | 0 |
agray3 | 0 | 1 | 0 |
jdomke | 0 | 1 | 0 |
yeahdongcn | 0 | 1 | 0 |
andysalerno | 0 | 1 | 0 |
monatis | 0 | 1 | 0 |
zhipenghan | 0 | 1 | 0 |
ClarkChin08 | 0 | 1 | 0 |
kriation | 0 | 1 | 0 |
danielhanchen | 0 | 1 | 0 |
teleprint-me | 0 | 1 | 0 |
65a | 0 | 1 | 0 |
sbonds | 0 | 1 | 0 |
SommerEngineering | 0 | 1 | 0 |
amitj1jan | 0 | 1 | 0 |
nopperl | 0 | 1 | 0 |
lld1995 | 0 | 0 | 1 |
vecorro | 0 | 0 | 1 |
arch-btw | 0 | 0 | 1 |
richardanaya | 0 | 0 | 1 |
vt-alt | 0 | 0 | 1 |
farnazj | 0 | 0 | 1 |
anunknowperson | 0 | 0 | 1 |
JMPSequeira | 0 | 0 | 1 |
skoulik | 0 | 0 | 1 |
zhaoyuchen1128 | 0 | 0 | 1 |
Deputation | 0 | 0 | 1 |
Ther-nullptr | 0 | 0 | 1 |
mneedham | 0 | 0 | 1 |
Edw590 | 0 | 0 | 1 |
EverythingForAI | 0 | 0 | 1 |
cikkle | 0 | 0 | 1 |
marcingomulkiewicz | 0 | 0 | 1 |
mirekphd | 0 | 0 | 1 |
hnfong | 0 | 0 | 1 |
ffroquemartinez | 0 | 0 | 1 |
idekel | 0 | 0 | 1 |
nivibilla | 0 | 0 | 1 |
DerekJuba-NIST | 0 | 0 | 1 |
abgulati | 0 | 0 | 1 |
perp | 0 | 0 | 1 |
moqimoqidea | 0 | 0 | 1 |
thesyntaxinator | 0 | 0 | 1 |
SteelPh0enix | 0 | 0 | 1 |
justinsteven | 0 | 0 | 1 |
palindsay | 0 | 0 | 1 |
differentprogramming | 0 | 0 | 1 |
lcarrere | 0 | 0 | 1 |
MarsBlessed | 0 | 0 | 1 |
sreenivasraghavan71 | 0 | 0 | 1 |
Lookforworld | 0 | 0 | 1 |
nmandic78 | 0 | 0 | 1 |
Green-Sky | 0 | 0 | 1 |
eliranwong | 0 | 0 | 1 |
quarterturn | 0 | 0 | 1 |
rudiservo | 0 | 0 | 1 |
werruww | 0 | 0 | 1 |
unclemusclez | 0 | 0 | 1 |
JohnClaw | 0 | 0 | 1 |
micsthepick | 0 | 0 | 1 |
kherud | 0 | 0 | 1 |
duynt575 | 0 | 0 | 1 |
tomgm777 | 0 | 0 | 1 |
chiranko | 0 | 0 | 1 |
Gomez12 | 0 | 0 | 1 |
starP-W | 0 | 0 | 1 |
nathanodle | 0 | 0 | 1 |
tybalex | 0 | 0 | 1 |
akhilkapil | 0 | 0 | 1 |
LiquidGunay | 0 | 0 | 1 |
stduhpf | 0 | 0 | 1 |
mirek190 | 0 | 0 | 1 |
flatsiedatsie | 0 | 0 | 1 |
tihom77 | 0 | 0 | 1 |
lorihuang | 0 | 0 | 1 |
ctb111 | 0 | 0 | 1 |
aahouzi | 0 | 0 | 1 |
jim-plus | 0 | 0 | 1 |
Yan-Xiangjun | 0 | 0 | 1 |
josharian | 0 | 0 | 1 |
Aridbhdkkj | 0 | 0 | 1 |
AUTOMATIC1111 | 0 | 0 | 1 |
d-kleine | 0 | 0 | 1 |
warren-lei | 0 | 0 | 1 |
yancaoweidaode | 0 | 0 | 1 |
andreys42 | 0 | 0 | 1 |
gpacix | 0 | 0 | 1 |
guinmoon | 0 | 0 | 1 |
apresence | 0 | 0 | 1 |
kasrahabib | 0 | 0 | 1 |
Hardik-Choraria | 0 | 0 | 1 |
99991 | 0 | 0 | 1 |
Sakura4036 | 0 | 0 | 1 |
markat1 | 0 | 0 | 1 |
amakropoulos | 0 | 0 | 1 |
MeemeeLab | 0 | 0 | 1 |
joshknnd1982 | 0 | 0 | 1 |
sealad886 | 0 | 0 | 1 |
lin72h | 0 | 0 | 1 |
jie80219 | 0 | 0 | 1 |
Arashimu | 0 | 0 | 1 |
nne998 | 0 | 0 | 1 |
StatPan | 0 | 0 | 1 |
jeroen-mostert | 0 | 0 | 1 |
1cekrim | 0 | 0 | 1 |
bong-furiosa | 0 | 0 | 1 |
djain-fujitsu | 0 | 0 | 1 |
m828 | 0 | 0 | 1 |
Fulgurance | 0 | 0 | 1 |
VelocityRa | 0 | 0 | 1 |
bartowski1182 | 0 | 0 | 1 |
dafei2017 | 0 | 0 | 1 |
metal3d | 0 | 0 | 1 |
Emmanuel97460 | 0 | 0 | 1 |
vmarchenkoff | 0 | 0 | 1 |