Weekly GitHub Report for Llama.cpp: April 21, 2025 - April 28, 2025 (12:02:56)
Thank you for subscribing to our weekly newsletter! Each week, we deliver a comprehensive summary of your GitHub project's latest activity right to your inbox, including an overview of your project's issues, pull requests, contributors, and commit activity.
Table of Contents
I. News
1.1 Recent Version Releases:
The current version of this repository is b4991
1.2 Version Information:
The version released on March 29, 2025 introduces key updates and changes, though no detailed release notes were available for this report.
II. Issues
2.1 Top 5 Active Issues:
We consider active issues to be issues that have been commented on most frequently within the last week. Bot comments are omitted.
- Misc. bug: llama-cli (vulkan backend) output gibberish with old vulkan sdk: This issue involves a bug in the llama-cli tool when using the Vulkan backend with an outdated Vulkan SDK, resulting in gibberish output due to a mismatch in cooperative matrix support. The problem can be temporarily resolved by setting an environment variable, but the user suggests a more robust solution through a code patch to handle the cooperative matrix extension more effectively.
- The comments discuss a similar issue experienced by another user, who confirms that setting an environment variable resolves the gibberish output but significantly reduces performance. Further discussion reveals that the performance drop is due to certain shader constants being set to zero, skipping most compute logic. A suggestion is made to enable the VK_KHR_cooperative_matrix extension with the latest Vulkan headers and compiler to potentially improve performance.
- Number of comments this week: 5
- Compile bug: Vulkan Cross compile for arm64: This issue involves a compile bug encountered when attempting to cross-compile the llama.cpp project with Vulkan backend support for an arm64 architecture on an x86-64 Ubuntu system. The problem arises from the use of an x86 version of the glslc tool, which does not accurately reflect the features available on the target arm64 SoC GPU environment, prompting a request for a cross-compile check and additional build arguments.
- The comments discuss the platform independence of glslc and suggest using a newer version for more features. A user provides guidance on using Ubuntu's built-in GLSLC dependencies and suggests installing specific packages. There is a discussion about adding flags to disable specific Vulkan features, with a consensus that runtime checks on the device can handle feature support.
- Number of comments this week: 5
- Feature Request: Tensor paralellism (--split-mode row) over rpc: This issue is a feature request to implement tensor parallelism over RPC in llama.cpp, specifically focusing on enabling the `--split-mode row` option to function effectively with the RPC server. The user is seeking guidance on how to extend the functionality of the RPC server to distribute tensor splits across different hosts for computation, with the goal of enhancing the performance of models in smaller setups like home labs or small enterprises.
- The comments discuss the current implementation of tensor parallelism, suggesting a more flexible approach using a virtual backend that can handle multiple backends in parallel. The user expresses interest in collaborating more closely with the project team and is advised to start with the BLAS backend, modify it for matrix multiplication, and implement a buffer type for distributing matrix tiles across backends.
- Number of comments this week: 3
- Eval: HIP: Llama-server multi-instance lockup: This issue involves a problem with the llama-server where threads working with GPUs are getting stuck in the libhsa-runtime, as indicated by a rocgdb backtrace. The issue appears to be related to a multi-instance lockup, and the user has provided detailed backtraces to help diagnose the problem.
- The initial comment suggests that the issue might not be a bug in llama.cpp and recommends reporting it to AMD, as the backtrace lacks debug symbols for ROCm components. The user acknowledges this advice and plans to investigate further, later providing additional backtraces with debug symbols for better analysis.
- Number of comments this week: 3
- Eval bug: Flash Attention not working with NVIDIA GeForce RTX 4060 Ti: This issue reports a problem with the 'llama-simple' project where enabling flash attention on an NVIDIA GeForce RTX 4060 Ti results in a CUDA error related to illegal memory access, suggesting potential incompatibility or a bug with the hardware or software configuration. The user is experiencing the same error as a previously reported issue and is questioning whether the 4060 Ti supports flash attention, providing detailed log outputs to illustrate the problem.
- The first comment indicates that the issue could not be reproduced on an RTX 4090, suggesting a possible hardware-specific problem. The second comment notes that switching to a different model, Qwen2.5-0.5B-Instruct, resolves the error, raising the question of whether the issue is related to the specific model structure used.
- Number of comments this week: 2
2.2 Top 5 Stale Issues:
We consider stale issues to be issues that have had no activity within the last 30 days. The team should work together to get these issues resolved and closed as soon as possible.
- Kompute-based Vulkan backend shows an GGML_OP_GET_ROWS error: This issue pertains to a problem with the Kompute-based Vulkan backend, which is causing a GGML_OP_GET_ROWS error that does not occur with other Vulkan backends. The issue has been open for over a year, indicating a potentially complex problem that has yet to be resolved.
- Feature Request: Task Cancellation on Client Disconnection: This issue addresses the need for a feature that allows task cancellation in the server when a client disconnects, as the current system continues processing queued tasks even after a client has canceled their request, leading to inefficiencies and potential server overload. The proposed enhancement aims to terminate task processing upon request cancellation to prevent delays in subsequent requests and avoid server paralysis when a client makes numerous requests and then disconnects.
- Question: How to generate an MPS gputrace: This issue is about a user seeking guidance on how to generate a Metal Performance Shaders (MPS) gputrace for the llama.cpp project during model inference, as part of efforts to enhance the Metal backend for a related project. The user is specifically interested in obtaining a debugger output similar to what is provided by the Metal Debugger in Xcode, and is inquiring if there is any documented or known method to achieve this.
- common: download from URL, improve parallel download progress status: This issue addresses the need to improve the progress status display for parallel downloads when retrieving sharded models, as the current implementation causes conflicts in the progression indicators. The proposed solution involves properly implementing the `CURLOPT_NOPROGRESS` option to ensure accurate and non-conflicting progress updates during the download process.
- Misc. bug: Data check in examples/gguf: This issue pertains to a bug in the `llama-gguf` tool, where an error occurs during the data check step when attempting to read a GGUF model, specifically when verifying the first element of the first tensor in the model. The problem arises because the code expects each element of the tensor to be equal to `100` plus the tensor index, but the actual data does not meet this expectation, leading to a failed assertion and a core dump.
2.3 Open Issues
This section lists, groups, and then summarizes issues that were created within the last week in the repository.
Issues Opened This Week: 27
Summarized Issues:
- Model Loading Errors: Issues with loading models in llama.cpp are prevalent, often due to missing components or incorrect configurations. For instance, the `bge-reranker-v2-gemma` model fails to load due to a missing SEP token, while the Moonlight-16B-A3B-Instruct model conversion fails due to a missing 'vocab' attribute in the tokenizer.
- Vulkan and GPU Compatibility Issues: Several issues arise from using Vulkan and specific GPUs, leading to performance discrepancies and errors. The `llama-cli` tool outputs gibberish with an older Vulkan SDK, and the `llama-gemma3-cli` tool shows performance differences between Vulkan and CPU builds.
- Performance and Optimization Concerns: Users report slow performance and seek optimization advice for various setups. Gemma 3 QAT Models show slow token generation, and there are requests for tensor parallelism over RPC to improve performance.
- Compilation and Build Errors: Compilation issues are common, often due to missing files or incorrect configurations. Errors include a missing "ggml.h" file during CUDA builds and a failed CUDA initialization on specific GPUs.
- Server and Runtime Errors: The llama-server and related components face runtime issues, such as unresponsiveness due to memory errors and crashes from invalid commands. These issues highlight the need for better error handling and resource management.
- Feature Requests and Enhancements: Users request new features and improvements, such as key bindings in the web UI, support for new models, and a pure C API for mtmd functionality. These requests aim to enhance usability and extend the project's capabilities.
- Dependency and Environment Issues: Problems with dependencies and environment configurations are reported, such as a missing `PySide6` module in `gguf-dump` and CI build failures due to disk space issues.
- Documentation and Usability Improvements: There are calls to update documentation and improve usability, such as replacing deprecated binaries in documentation and refactoring code for better organization.
- Model and Feature Bugs: Bugs in models and features, such as the EXAONE model's failure with quantized KV cache and the Flash Attention feature's malfunction, indicate the need for debugging and validation.
- Benchmarking and Testing: Users seek recommendations for benchmarking tools to evaluate GGUF models, indicating a need for standardized testing frameworks.
2.4 Closed Issues
This section lists, groups, and then summarizes issues that were closed within the last week in the repository. This section also links the associated pull requests if applicable.
Issues Closed This Week: 13
Summarized Issues:
- Bug in llama-embedding tool due to batch size and context size mismatch: This issue involves a bug in the llama-embedding tool where an assertion fails because the batch size parameter (n_batch) is smaller than the context size parameter (n_ctx). This problem occurs specifically when running a command with the llama-2-7b-chat model on a FreeBSD system, resulting in an abort trap (a minimal parameter sketch follows this list).
- Failure on older x86 hardware due to AVX instructions: The llama software fails to run on older x86 hardware because it uses AVX vector instructions without checking for CPU support. This results in a segmentation fault when attempting to execute on CPUs lacking AVX capabilities.
- Server lockup and segmentation faults with multiple GPU instances: Running multiple instances of the llama.cpp server on the same GPU, specifically with HIP backend on AMD hardware, causes the server to lock up during prompt processing. The issue persists across different versions and is noted to cause segmentation faults on multiple GPUs using rocm 6.3.0.
- Low CPU utilization with --rpc option in llama-cli: A bug in the `llama-cli` module results in unexpectedly low CPU utilization when using the `--rpc` option to distribute inference tasks across multiple CPU nodes. Only a few cores are actively used despite allocating all available CPUs, and a suggested fix involves modifying the RPC server code to configure the number of threads used by the CPU backend.
- Performance regression in llama-bench with Vulkan backend: There is a significant performance regression in the llama-bench tool when using the Vulkan backend with a forced integer dot product code path on NVIDIA GeForce RTX 4070. The performance dropped from 2899 tokens per second in build 5010 to 1935.29 tokens per second in build 5145, attributed to a bug fix in the initial implementation.
- Feature request for enhancement in llama.cpp: This feature request was filed using the project's issue template but left essentially empty. The user confirmed they were using the latest version, had followed the README guidelines, and had checked existing discussions, yet provided no details about the feature description, motivation, or possible implementation.
- Performance discrepancy between 16-core and 8-core CPUs: An unexpected performance discrepancy is observed where a 16-core CPU is slower than an 8-core CPU. This is attributed to the process being memory bandwidth-bound rather than CPU-bound, leading to inefficiencies and synchronization overhead when using more cores.
- CUDA compiler heap space issue with constexpr methods: Two static constexpr methods in the mmq.cuh file cause the CUDA compiler to run out of heap space due to cascaded ternary (?:) operators. A potential solution involves using switch statements, which are permissible in constexpr functions under C++17, unlike the C++11 standard in use when the code was originally written (an illustrative before/after sketch follows this list).
- Conversion script mistakenly included in build directory: The conversion script `convert_hf_to_gguf.py` is mistakenly included in the build's `bin/` directory on Windows when using MinGW. This leads users to believe it is ready to run, but it actually requires installation from the cloned repository rather than the default PyPI package, disrupting simple "pip install" workflows and causing confusion.
- Bug in model conversion script with missing parameter: A bug is encountered when converting the DeepSeek-R1-bf16 model to gguf format using the `convert_hf_to_gguf_update.py` script. The ktransformers framework fails to load the model correctly due to a missing `kv_b` parameter, resulting in an error message displaying `1case`.
- Compilation problem with Rocm 6.4 and gfx1201 architecture: A compilation problem occurs with the `llama.cpp` project when using the latest release of Rocm 6.4, specifically when enabling flash attention with rocWMMA on the gfx1201 architecture (RX 9070 XT). Errors arise due to a lack of full support for gfx12 in the currently released versions of rocWMMA, although a workaround using the git version of rocWMMA is mentioned.
- Unsupported operation causing SIGABRT on Apple CPUs: A bug in the llama.cpp project involves an unsupported operation "CPY" causing a SIGABRT error on Apple CPUs when running a specific command with the llama-cli module. This issue is potentially linked to the forced execution on the BLAS backend, as discussed in related closed issues and pull requests.
- Parsing error with JSON schema in llama-cli: A bug in the llama-cli module occurs when a JSON schema defining an array with a maximum length of zero generates a GBNF grammar that cannot be parsed. This results in a "failed to parse grammar" error when executed.
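Regarding the llama-embedding batch/context mismatch noted above, the snippet below is a minimal sketch of the constraint using the llama.cpp C API. The fields shown (`n_ctx`, `n_batch`, `embeddings`) exist in `llama_context_params`, but the sizes are illustrative, the helper function is invented for this example, and the exact entry point and assertion text can differ between builds.

```cpp
#include "llama.h"

// Sketch only: sizes are illustrative. The point is that n_batch should not be
// smaller than n_ctx when creating an embedding context, which is the mismatch
// the reported assertion guards against.
llama_context * make_embedding_context(llama_model * model) {
    llama_context_params cparams = llama_context_default_params();
    cparams.n_ctx      = 4096;
    cparams.n_batch    = 4096;  // keep n_batch >= n_ctx
    cparams.embeddings = true;  // pooled embeddings, as in llama-embedding
    return llama_init_from_model(model, cparams);
}
```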
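For the CUDA heap-space issue above, the change discussed is a mechanical rewrite: a long cascaded ternary chain inside a constexpr function becomes a switch, which C++17 permits in constexpr context. The pattern below is purely illustrative; the function names and tile sizes are invented and this is not the actual mmq.cuh code.

```cpp
// Hypothetical illustration of the refactor, not the real mmq.cuh methods.
// C++11-era constexpr functions encouraged long ?: chains; under C++17 the
// same logic can be expressed as a switch, which is easier on the compiler.
constexpr int tile_k_ternary(int type) {
    return type == 0 ?  32 :
           type == 1 ?  64 :
           type == 2 ? 128 :
                       256;
}

constexpr int tile_k_switch(int type) {
    switch (type) {
        case 0:  return 32;
        case 1:  return 64;
        case 2:  return 128;
        default: return 256;
    }
}

static_assert(tile_k_ternary(2) == tile_k_switch(2), "both forms agree");
```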
2.5 Issue Discussion Insights
This section will analyze the tone and sentiment of discussions within this project's open and closed issues that occurred within the past week. It aims to identify potentially heated exchanges and to maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open or closed issues from the past week.
III. Pull Requests
3.1 Open Pull Requests
This section provides a summary of pull requests that were opened in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.
Pull Requests Opened This Week: 18
Key Open Pull Requests
1. [sync #10544] llama/ggml: add LLM training support: This pull request involves rebasing a previous pull request (#10544) to integrate LLM (Large Language Model) training support into the llama/ggml project, highlighting necessary changes for compatibility with an upcoming update (#12799), and includes various commits addressing improvements and fixes related to memory management, interface simplification, and model creation logic.
- URL: pull/13105
- Merged: No
- Associated Commits: a0e89, 0aa30, 54fa2, 6a1ce, e9c8e, a526b, 1c8d3, c2167, b5aa6, a0af7, efbd2, 28ad3, 79e23, 6c015, 6b851, fd432, d31e3, 45fd5, b7ee3, ebf09, b2fce
2. quantize: Handle user-defined pruning of whole layers (blocks): This pull request introduces a feature that allows users to prune entire layers of a model by specifying them in a comma-separated list via the `--prune-layers` command-line option. The remaining layers are renumbered to maintain sequence continuity, model metadata is updated, and the correct importance score vector is used if available. The change also enables tensor/layer-wise quantization on selected tensor types, inspired by research on layer redundancy and building on previous work.
- URL: pull/13037
- Merged: No
3. llama-bench: add `-d` depth arg: This pull request introduces a new `-d` or `--n-depth` argument to the `llama-bench` tool, allowing users to run tests with a prefilled KV cache context, and includes updates to the README and default parameters for faster execution.
- URL: pull/13096
- Merged: No
Other Open Pull Requests
- RPC Server Vulnerabilities: This topic addresses vulnerabilities in the `rpc-server` by enhancing input validation and error handling to prevent Denial of Service attacks. The pull request implements robust type and bounds checks, replaces `abort()` calls with error logging, and ensures proper error propagation throughout the server's RPC command handling processes.
- ChatGLMModel Tokenizer Issue: The pull request resolves the "Cannot find tokenizer merges in model file" issue for the ChatGLMModel when loading models converted from HuggingFace. It includes verification steps for building, converting weights, and running inference, while noting a known issue with special token handling in `llama.cpp`.
- SYCL Unary Kernels Addition: This pull request adds missing unary kernels, such as absolute, ELU, and SGN, to the SYCL implementation. It changes the method for obtaining the index position of an element and includes several commits addressing kernel launch range decoupling and cleaning of auto-imported header files.
- PySide6 Dependency Removal: The pull request addresses an issue with the `gguf-dump` script incorrectly requiring the PySide6 module. It removes unnecessary files and updates the `pyproject.toml` to reference the main functions of the scripts, while adding PySide6 as a dependency in `*-extra` devShells for other contexts.
- Enum and Structure Optimization: This pull request optimizes the size and alignment of enums and structures to enhance performance on modern x64 platforms. It ensures compatibility with older compilers and systems without requiring the C23 standard, including changes to reduce the size of structures like `llama_model_params` (a generic layout sketch follows this list).
- GLM4-0414 Model Template Fix: The pull request corrects the template for GLM4-0414 models, which used an incorrect legacy template resulting in a missing preamble. It eliminates the need for a workaround involving launching `llama-server` with a specific chat template and includes additional commits to fix spaces and remove redundant tokens.
- Cache-less Context for Embeddings: This pull request proposes changes to allow for a cache-less context when using embeddings-only models like BERT. It eliminates the need for a KV cache and includes commits that enable reranking with the `encode()` function and ensure `encode()` clears the `embd_seq`.
- Model Architecture Handling: The pull request enhances the handling of model architectures by ensuring correct mapping when `architectures` are specified in both `vision_config` and `text_config`. It includes JSON examples and commits that improve model architecture handling and remove `trust_remote_code`.
- MUL_MAT_ID Operator Support: This pull request introduces support for the MUL_MAT_ID operator necessary for MOE models. It includes tests demonstrating its functionality on the CANN backend and removes Chinese comments from the code.
- Environment Variable Configuration: The pull request simplifies the configuration of environment variables for GGML_CANN_MEM_POOL and GGML_CANN_ASYNC_MODE. It introduces a priority queue-based memory pool option and allows various case-insensitive values to enable the asynchronous mode.
- Backend API for Tensor Data: This pull request proposes a new backend API for loading tensor data using precomputed hashes stored in the model KV. It aims to reduce model load times in distributed LLM inference scenarios by allowing the RPC backend to load data with specified hashes.
- GEMM SME Kernel Integration: The pull request integrates a new fp32 = bf16 X bf16 GEMM SME kernel into the KleidiAI system. It enables its use for KV-cache matrix multiplication operations by converting both the LHS and RHS to the bf16 format at runtime.
- README Update for TTS Example: This pull request updates the README.md file for the text-to-speech example to recommend using the 'afplay' command on MacOS. It addresses the issue that 'aplay' is not automatically installed on MacOS systems.
- Futex-based Yield Barrier: The pull request introduces a futex-based yield barrier to replace the original spin-based barrier in GGML. It enhances thread scheduling efficiency and overall system performance by allowing threads to yield instead of busy-waiting, improving scalability and power efficiency (a minimal sketch of the idea follows this list).
- Q4_K Layout Reorder Optimization: This pull request implements a reorder optimization for the Q4_K layout in the SYCL backend. It enhances performance metrics for various models and configurations, based on a previous branch by @Alcpz.
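Following up on the enum and structure optimization entry above, this is a generic sketch of the technique rather than the real llama.cpp structs: order wide members first, keep small enum-like values in fixed-width fields, and pin the expected layout with static_asserts. It assumes a 64-bit platform with 8-byte pointers; all names below are hypothetical.

```cpp
#include <cstdint>

// Hypothetical struct, not llama_model_params: wide members first to avoid
// interior padding, small values in fixed-width fields, and the layout pinned
// by static_asserts (assumes a 64-bit platform).
struct sample_params {
    void *  user_data;   // 8 bytes, naturally aligned at offset 0
    int32_t n_ctx;
    int32_t n_batch;
    int8_t  split_mode;  // enum-like value stored in a fixed-width field
    bool    use_mmap;
    // only 2 bytes of tail padding remain
};

static_assert(sizeof(sample_params)  == 24, "unexpected interior padding");
static_assert(alignof(sample_params) == alignof(void *), "unexpected alignment");
```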
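The futex-based yield barrier entry above is about letting worker threads sleep at the synchronization point instead of burning cycles. The sketch below illustrates the idea with C++20 `std::atomic` wait/notify (futex-backed on Linux); it is not GGML's actual thread-pool code, and the class is invented for this example.

```cpp
#include <atomic>
#include <cstdint>

// Illustrative yield-style barrier, not the GGML implementation. The last
// thread to arrive advances the generation counter and wakes the others;
// everyone else sleeps on the counter instead of spinning.
class yield_barrier {
public:
    explicit yield_barrier(int n) : n_threads(n) {}

    void arrive_and_wait() {
        const uint32_t gen = generation.load(std::memory_order_acquire);
        if (arrived.fetch_add(1, std::memory_order_acq_rel) + 1 == n_threads) {
            arrived.store(0, std::memory_order_relaxed);
            generation.fetch_add(1, std::memory_order_release);
            generation.notify_all();                         // wake sleeping threads
        } else {
            generation.wait(gen, std::memory_order_acquire); // sleep, do not spin
        }
    }

private:
    const int             n_threads;
    std::atomic<int>      arrived{0};
    std::atomic<uint32_t> generation{0};
};
```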
3.2 Closed Pull Requests
This section provides a summary of pull requests that were closed in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.
Pull Requests Closed This Week: 44
Key Closed Pull Requests
1. Synced with upstream.: This pull request involves multiple updates and renaming of files, including syncing with the upstream repository, modifying server and configuration files, and renaming README and Makefile documents, all signed off by Brad Hutchings, but it was ultimately not merged.
- URL: pull/13049
- Merged: No
- Associated Commits: a1ebb, d1959, 71c6a, a4f2d, 5e9c3, 56615, 30d11, 1ac6e, 6b286, 79397, 04b23, 33953, 9a6cb, 2cab5, 83c8b, d3354, 9713c, bc206, 35f9c, 5f96c, aa907, ab621, da3a9, db156, f2a4d, 74293, 582dc, 2cb14, b3653, 277d9, c0dfb, 871a9, 0899a, 7e2be, 06680, a217d, 94c59, e9d66, 6a44b, 6fe0e, 59707, 9b230, c543f, a99d9, 16853, 65dbe, 68355, 5a549, 92aa1, d5b9f, 30ff5, 6afea, 0442e, c13ce, 4f23a, f557d, a7cea, 3b8f0, dbb89, 5b92c, 9cfcb, ae5e9, b9635, 77bb3, 02101, e9043, 36c3f, a1f1d, 09f5c, 4fbe4, c44f9, 998d3, c79ab, 9e0a1, 0723a, 8d50e, 4367c, 604e0, 8c3ff, 444c4, ddeae, 93c9d, 783d0, b1f1b, 51e9d, f97b8, 32157, c2dee, 43bc5, c253e, dfe63
2. rebuild llama-ts: This pull request involves a series of commits aimed at rebuilding the "llama-ts" component of the project, focusing on enhancing parallel processing capabilities, fixing build errors, and improving tensor operations, although it was ultimately not merged.
- URL: pull/13039
- Merged: No
- Associated Commits: f472d, ef1c8, 2b4b2, f970a, 51e92, a155d, 37937, 18bbe, a667b, b52eb, 09313, 9cd41, 822a5, fa987, a7b37, bff5a, 45413, 80572, c9a37, 3ec47, f8350, 867c4, 5c285, e41e8, 08cdc, 3fa59, 70047, 46aae, 70f9d, c2b40, 63db2, 5bda4, cbc8a
3. metal : add memory pool for temp allocs: This pull request introduces a memory pool mechanism for temporary allocations in the Metal backend of the ggml project, aiming to efficiently manage intermediate buffers for operations like convolution and data rearrangement, similar to the CUDA backend's functionality, by implementing features such as dynamic heap resizing and buffer reuse, while also addressing tasks like buffer release and memory leak checks.
- URL: pull/12850
- Merged: 2025-04-22T13:15:51Z
Other Closed Pull Requests
- GLM4Z Model Enhancements: This topic covers improvements to the `convert_hf_to_gguf.py` script for the GLM4Z Model, addressing issues like the half rope problem, multi-EOS issues, and the GGGG output problem. The pull requests aim to enhance the efficiency and maintainability of the codebase by leveraging existing architecture.
- Pixtral 12B Model Support: The pull requests introduce support for the Pixtral 12B model in the llama.cpp project, including conversion to the GGUF format and implementation of 2D-RoPE. They also address issues with GPU backends, specifically CUDA and Metal, for the Pixtral model.
- Vision Model CLI Unification: These pull requests merge the command-line interfaces of various vision models into a unified `llama-mtmd-cli`, while addressing support for different model sizes and templates. They also rework the SIGINT handler for controlled interruption and stdout flushing.
- SmolVLM Model Integration: The pull requests add support for SmolVLM model versions 1 and 2, optimized for vision tasks like OCR and object detection. They integrate pre-quantized models from Hugging Face and provide conversion scripts for AI camera applications.
- CMake Configuration and Compilation Fixes: These pull requests modify the CMake configuration for `libllama` by adjusting include paths and relocating example files. They also address a compilation issue in the ggml-opencl_mm.cl file by removing a problematic macro definition.
- Performance Metrics and Enhancements: The pull requests introduce new performance metrics to the `llama-bench` tool and enhance the CUDA backend for better performance. They also involve manual tuning for AMD GCN architecture, resulting in significant performance improvements.
- FP16 and FP32 Computation Adjustments: These pull requests address issues with FP16 matrix multiplication in GLM4 models by enforcing FP32 computation. They aim to improve prompt processing speed on certain GPU architectures (a minimal ggml sketch follows this list).
- Projector Naming and Compatibility: The pull requests focus on improving naming conventions for projectors in the codebase and ensuring compatibility with various vision models. They replace abstract names with descriptive ones and remove unnecessary patterns.
- Multimodal Projector Conversion: These pull requests introduce experimental support for converting multimodal projectors into GGUF files, eliminating the need for separate conversion scripts. They also add a `--no-mmproj-offload` argument for debugging purposes.
- GGML Synchronization and Enhancements: The pull requests involve synchronizing the 'ggml' component by implementing faster convolution kernels and making code improvements. They also enhance low-precision floating point conversions for better performance.
- Remote Content Downloading: This pull request introduces the `common_remote_get_content` function, facilitating image downloads from remote servers. It includes features like maximum size and timeout support, along with corresponding tests.
- SSE 4.2 and x64 Base Variant Support: The pull request adds support for SSE 4.2 and a x64 base variant to the ggml project, enabling compatibility for CPUs without AVX support. It addresses issue #12866.
- Computation Graph Logging: This pull request introduces a feature to log the computation graph of a model to a CSV file, providing detailed information about each tensor. The program terminates immediately after the log is written.
- Documentation and Security Updates: The pull requests update documentation files for clarity and address security concerns by highlighting vulnerabilities in the RPC backend. They recommend against using the RPC backend in sensitive environments.
- Llama-bench Tool Enhancements: This pull request enhances the llama-bench tool by introducing separate throughput measurements for different processing stages. It provides more accurate performance metrics and corrects previous calculation errors.
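On the FP16/FP32 computation adjustments above: ggml exposes a per-operation precision hint that a graph can use to request FP32 accumulation for a specific matrix multiplication. The snippet below is a minimal sketch of that mechanism using `ggml_mul_mat_set_prec` with `GGML_PREC_F32`; the tensor shapes are arbitrary and this is not the actual GLM4 fix, which lives in llama.cpp's graph-building code.

```cpp
#include "ggml.h"

// Minimal sketch: build a tiny graph fragment and request FP32 accumulation
// for one FP16 matrix multiplication. Shapes are arbitrary; the real GLM4
// change applies the same hint inside llama.cpp's graph construction.
int main() {
    ggml_init_params params = {
        /*.mem_size   =*/ 16 * 1024 * 1024,
        /*.mem_buffer =*/ nullptr,
        /*.no_alloc   =*/ false,
    };
    ggml_context * ctx = ggml_init(params);

    ggml_tensor * w = ggml_new_tensor_2d(ctx, GGML_TYPE_F16, 64, 64); // FP16 weights
    ggml_tensor * x = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, 64, 8);  // activations

    ggml_tensor * y = ggml_mul_mat(ctx, w, x);
    ggml_mul_mat_set_prec(y, GGML_PREC_F32); // request FP32 accumulation for this op

    ggml_free(ctx);
    return 0;
}
```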
3.3 Pull Request Discussion Insights
This section will analyze the tone and sentiment of discussions within this project's open and closed pull requests that occurred within the past week. It aims to identify potentially heated exchanges and to maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open or closed pull requests from the past week.
IV. Contributors
4.1 Contributors
Active Contributors:
We consider an active contributor in this project to be any contributor who has made at least 1 commit, opened at least 1 issue, created at least 1 pull request, or made more than 2 comments in the last month.
If there are more than 10 active contributors, the list is truncated to the top 10 based on contribution metrics for better clarity.
| Contributor | Commits | Pull Requests | Issues | Comments |
|---|---|---|---|---|
| ngxson | 179 | 14 | 5 | 43 |
| ggerganov | 86 | 10 | 1 | 21 |
| BradHutchings | 97 | 1 | 0 | 1 |
| zhouwg | 89 | 0 | 1 | 0 |
| jukofyork | 49 | 0 | 1 | 1 |
| No author found | 45 | 0 | 0 | 0 |
| CISC | 24 | 0 | 0 | 8 |
| slaren | 5 | 2 | 0 | 23 |
| qnixsynapse | 21 | 2 | 0 | 6 |
| bandoti | 14 | 1 | 1 | 9 |