Weekly GitHub Report for Llama.cpp: March 09, 2026 - March 16, 2026 (19:43:11)
Thank you for subscribing to our weekly newsletter! Each week, we deliver a comprehensive summary of your GitHub project's latest activity right to your inbox, including an overview of your project's issues, pull requests, contributors, and commit activity.
Table of Contents
I. News
1.1 Recent Version Releases:
The current version of this repository is b4991
1.2 Version Information:
The version released on March 29, 2025, introduces key updates and improvements, focusing on enhanced performance and user experience. Notable highlights include optimized features and bug fixes that streamline functionality and increase stability.
II. Issues
2.1 Top 5 Active Issues:
We consider active issues to be issues that have been commented on most frequently within the last week. Bot comments are omitted.
- [BUG-UNCONFIRMED] Eval bug: Vulkan throws vk::DeviceLostError on Qwen3.5 35B A3B: This issue reports a persistent vk::DeviceLostError occurring when running the Qwen3.5 35B A3B model with the Vulkan backend on a multi-GPU setup consisting of two Radeon RX 7900 XTX cards, especially with large context sizes. The problem appears related to synchronization or workgroup tiling in Vulkan multi-GPU dispatch after a specific commit, and extensive debugging and testing have led to proposed fixes that improve stability and eliminate garbled data, with users confirming the issue is resolved in recent patches.
- The comment discussion involved reproducing the error, bisecting to identify the problematic commit, testing single vs multi-GPU setups, exploring driver and synchronization issues, proposing and iterating on patches, and confirming fixes that resolved the DeviceLostError and data corruption, with suggestions to update drivers and open new issues for related but distinct crashes.
- Number of comments this week: 32
- [BUG-UNCONFIRMED] Eval bug: RPC server leaks CUDA graphs during inference, leading to OOM: This issue reports a memory leak in the RPC server when using the CUDA backend for inference with certain large models, notably Qwen 3.5 MoE variants. The leak occurs because each token generation triggers creation of new CUDA graph cache entries that are never freed, eventually exhausting system memory and causing the process to be killed by the OOM killer; setting the environment variable `GGML_CUDA_DISABLE_GRAPHS=1` disables CUDA graph capture and prevents the leak.
- The comment discussion thoroughly analyzes the root cause, identifying that the RPC server creates a new `ggml_context` with new tensor pointers for every graph compute call, causing unbounded growth of the CUDA graph cache keyed by these pointers; the leak frequency depends on model architecture and memory fitting, with MoE expert weights offloaded to CPU causing rapid leaks due to multiple RPC splits per token. Various tests confirm the leak rates and behaviors across models, and several fixes are proposed, including bounded CUDA graph caches, per-split caching on the client, topology-based graph comparison, and RPC server context reuse to stabilize pointers and prevent unbounded cache growth.
- Number of comments this week: 23
- [BUG-UNCONFIRMED] Eval bug: amd vulkan crashes with vk::DeviceLostError with context > 65k tokens: This issue describes a crash occurring with the Vulkan backend when processing large contexts (around 65k to 80k tokens) on single-GPU setups using certain large Qwen3.5 models, resulting in a vk::DeviceLostError. The user has attempted bisecting to find the first bad commit and tested various driver versions and batch size configurations, noting that smaller models or reduced batch sizes can mitigate the crash, but the root cause remains unclear and may be related to model size, memory, or Vulkan specifics.
- The comments include requests for driver version details, sharing of bisect logs and testing results, suggestions to try less bleeding-edge drivers, observations about batch size impact on stability, and references to related issues and patches that helped multi-GPU setups but did not fully resolve the single GPU crashes.
- Number of comments this week: 10
- [ENHANCEMENT] Feature Request: WebUI: Add "Model Information" button to Models in Router Mode UI: This issue requests adding a "Model Information" button to the Router Mode UI in the web interface, similar to the one available in single-model mode, to allow users to easily view critical metadata such as file path, context size, chat templates, and quantization level for each model. The enhancement aims to improve usability by providing quick access to detailed model parameters directly within the router interface, which currently lacks this functionality and requires manual API queries or terminal log inspection.
- The comments discuss implementation details and UI improvements, including merging two parallel pull requests, addressing UI inconsistencies like nested scrollbars and menu separators, and expanding the information shown to include modality and argument details; contributors coordinate on combining efforts, testing, and refining the feature while emphasizing minimal and ergonomic changes.
- Number of comments this week: 10
- [BUG] [AMD GPU] [CRITICAL SEVERITY] [CUDA] Eval bug: #17795 introduces subtle correctness errors: This issue reports a subtle correctness error introduced by a specific commit on the HIP backend that causes quality degradation in various models, particularly reproducible with the mistral-vibe client. The problem does not appear on other clients or backends like CUDA, and reverting the commit resolves the issue, prompting investigation into scheduler logic compatibility across backends.
- The comments discuss attempts to reproduce the issue on different hardware and backends, suggest disabling the feature for HIP as a temporary fix, and confirm that reverting the problematic commit resolves the problem; a fuzzing test also identified a reproducible input for the error.
- Number of comments this week: 7
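The CUDA-graph leak described in the RPC server issue above comes down to a cache keyed by pointers that change on every call. The Python sketch below is a toy stand-in, not the ggml implementation: it shows why a fresh key per token grows the cache without bound, and how a bounded LRU cache, one of the fixes proposed in the discussion, caps that growth.

```python
from collections import OrderedDict

class GraphCache:
    """Toy stand-in for a graph cache keyed by graph identity."""

    def __init__(self, max_entries=None):
        self.max_entries = max_entries  # None = unbounded, as in the bug
        self.entries = OrderedDict()

    def lookup_or_capture(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)  # LRU refresh on cache hit
            return self.entries[key]
        self.entries[key] = f"captured-graph-for-{key}"
        if self.max_entries is not None and len(self.entries) > self.max_entries:
            self.entries.popitem(last=False)  # evict least recently used
        return self.entries[key]

# Unbounded cache with a fresh key per token: one leaked entry per token,
# mirroring the fresh ggml_context pointers the issue identifies.
leaky = GraphCache()
for token in range(1000):
    leaky.lookup_or_capture(("ctx-ptr", token))

# Bounded cache: growth stops at the cap regardless of token count.
bounded = GraphCache(max_entries=8)
for token in range(1000):
    bounded.lookup_or_capture(("ctx-ptr", token))

print(len(leaky.entries), len(bounded.entries))  # 1000 8
```

The other proposed fix, reusing the server-side context so pointers stay stable across calls, is equivalent to making the key identical on every lookup, which also keeps the cache small.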
2.2 Top 5 Stale Issues:
We consider stale issues to be issues that have had no activity within the last 30 days. The team should work together to get these issues resolved and closed as soon as possible.
As of our latest update, there are no stale issues for the project this week.
2.3 Open Issues
This section lists, groups, and then summarizes issues that were created within the last week in the repository.
Issues Opened This Week: 81
Summarized Issues:
- Model Output and Reasoning Issues: Multiple issues report problems with model output formatting, reasoning, and thinking behavior. These include infinite loops during reasoning, incorrect handling of thinking flags, misclassification of output content, and improper display of reasoning tags, all leading to confusing or stalled model responses.
- issues/20302, issues/20325, issues/20356, issues/20409, issues/20516, issues/20550
- Backend Crashes and Memory Errors: Several issues describe crashes and memory errors occurring in various backends such as CUDA, Vulkan, ROCm, SYCL, and Metal. These include out-of-memory errors, device lost errors, illegal memory access, segmentation faults, and memory leaks, often triggered by specific hardware, model sizes, or backend configurations.
- issues/20315, issues/20331, issues/20338, issues/20418, issues/20462, issues/20467, issues/20481, issues/20482, issues/20490, issues/20509, issues/20515, issues/20554, issues/20564, issues/20598, issues/20608, issues/20610
- Compilation and Build Issues: Multiple reports highlight compilation errors, warnings, and build failures across different platforms and configurations. Problems include incorrect compiler selection, conflicts with OpenSSL targets, unrecognized CPU targets, and policy warnings in CMake, causing build failures or incorrect builds.
- issues/20311, issues/20317, issues/20412, issues/20413, issues/20415, issues/20581
- Web UI and Server Functionality Bugs: Several issues concern bugs in the web UI and server behavior, including missing UI elements on mobile, duplicate requests, session header transmission failures, proxy misconfigurations causing security risks, and improper handling of API keys. These issues affect usability and security of the server and UI.
- issues/20326, issues/20471, issues/20371, issues/20372, issues/20475, issues/20568
- Model Loading and Initialization Failures: Issues report crashes and assertion failures during model loading or context cache restoration, often related to specific models or quantization types. These failures prevent successful initialization and usage of models in various environments.
- issues/20358, issues/20473, issues/20608
- Performance Regressions and Optimization Requests: Some issues describe performance regressions in Vulkan and other backends, slower processing with certain options enabled, and requests for CI optimization and new feature support to improve efficiency and compatibility.
- issues/20322, issues/20386, issues/20485, issues/20492, issues/20597, issues/20603
- Feature Requests for UI and Model Control: Several requests seek enhancements such as adding toggle buttons for AI thinking, configurable approval for MCP tool calls, model information display in router mode, control vector support, and graceful termination of reasoning budget to improve user control and experience.
- issues/20343, issues/20541, issues/20547, issues/20557, issues/20632
- Model Output Parsing and Template Bugs: Issues include template parsing errors causing memory allocation failures, bugs in autoparser causing infinite loops or incorrect output, and problems with JSON schema parsing leading to server errors, all affecting correct output generation and formatting.
- issues/20305, issues/20425, issues/20344, issues/20500, issues/20532
- Hardware-Specific Backend Compatibility Problems: Reports highlight issues with specific GPUs or hardware configurations, such as Intel Arc GPUs causing TDR crashes or segmentation faults, AMD GPUs with memory aperture violations, and incompatibilities in backend support for certain operations or kernels.
- issues/20338, issues/20423, issues/20554, issues/20564, issues/20619
- Model-Specific Bugs and Crashes: Certain models like Qwen3.5 and GLM variants exhibit unique bugs such as gibberish output, crashes during inference, or incorrect default behaviors, requiring manual workarounds or fixes to restore expected functionality.
- issues/20321, issues/20464, issues/20473, issues/20516, issues/20550
- Cache and Prompt Handling Issues: Problems with prompt caching and context cache restoration cause parsing errors, infinite loops, and incorrect tool call outputs, impacting model interaction stability and performance.
- issues/20532, issues/20614, issues/20473
- Miscellaneous Bugs and Errors: Other issues include segmentation faults triggered by specific prompts, incorrect process priority handling, and test failures due to missing backend loads or assertion failures in tensor operations.
- issues/20496, issues/20360, issues/20611, issues/20636
2.4 Closed Issues
This section lists, groups, and then summarizes issues that were closed within the last week in the repository. This section also links the associated pull requests if applicable.
Issues Closed This Week: 59
Summarized Issues:
- Vulkan Backend Stability and Output Issues: Several issues report crashes, freezes, or corrupted output when using the Vulkan backend, especially with long contexts or specific GPU setups. These problems include random vk::DeviceLostError crashes after processing large token contexts, garbled output on certain AMD GPUs, and crashes on AMD Radeon RX 6600 due to GPU timeouts under heavy load.
- issues/19955, issues/20387, issues/20465, issues/20514, issues/20517
- Performance Degradation on ROCm/HIP Backend: Multiple reports highlight severe performance drops when running Qwen 3.5 models on AMD ROCm hardware compared to Vulkan, caused by kernel dispatch overhead and inefficient fused kernels. Enabling HIP graphs partially improves performance at low context depths but not at higher ones, and regressions after specific commits further reduce throughput.
- issues/20218, issues/20237, issues/20292, issues/20354
- Model Loading and Memory Issues: There are crashes and errors related to model loading due to tensor dimension mismatches, memory allocation failures, and integer overflow bugs. These include failures loading large models on CUDA due to out-of-memory errors, dimension alignment assertion failures on Windows, and a 32-bit overflow limiting memory target settings.
- issues/20307, issues/20308, issues/20431, issues/20466
- Logging Spam and Parser Warnings: Several issues describe excessive log spam with repeated messages like "No parser definition detected, assuming pure content parser" flooding the console during completions or raw requests. This logging problem was introduced by a recent commit but does not affect generation functionality.
- issues/20309, issues/20310, issues/20327
- JSON and Tool Call Argument Bugs: Bugs have been reported where streamed tool call arguments or large system prompts produce invalid JSON with mixed quotes or unquoted keys, causing downstream parsing failures in clients. These issues affect both streamed and non-streamed requests and are linked to the handling of tool call arguments in responses.
- issues/20352, issues/20359
- Multi-GPU and PCIe Topology Issues: Running large models on multi-GPU setups with non-peer-to-peer PCIe topologies causes garbled output at larger context sizes, while single GPU runs remain correct. Additionally, heavy PCIe and memory bandwidth usage on mixed GPU systems can cause device lost errors due to driver timeouts.
- issues/20052, issues/20387
- Web UI and Usability Problems: The Web UI has issues such as not automatically selecting loaded models by default and restricting model selection to only the first loaded model, forcing users to unload models to switch. Additionally, the MCP WebUI fails to route image attachments to vision-capable models, limiting functionality.
- issues/20314, issues/20382, issues/20488
- Backend Compilation and Dependency Issues: Compilation failures and runtime crashes occur with the SYCL backend on certain Linux distributions due to missing headers and incompatible compiler versions. Vulkan container image builds also fail due to Python dependency conflicts related to PyTorch versions.
- issues/20368, issues/20524
- Thread Safety and Data Races: A data race was detected in the `llama-completion` program when using the CPU backend with multiple threads and thread sanitization enabled, traced to the introduction of OpenMP for multi-thread processing.
- issues/20144
- Model-Specific Bugs and Output Formatting: Specific models exhibit unique bugs such as extra spaces after Chinese characters, output starting with unexpected tokens like "", or getting stuck in loops outputting repeated characters when flash attention is enabled.
- issues/20324, issues/20548, issues/20555
- Memory and Cache Management Issues: Problems include inefficient memory access patterns causing cache thrashing in fused kernels, KV-cache prefix management causing frequent cache invalidation and performance degradation, and VRAM overuse in router mode leading to out-of-memory errors on large models.
- issues/20436, issues/20510, issues/20582
- Crash and Assertion Failures in Specific Models: Crashes and assertion failures occur in models like Nemotron-3-Nano-30B-A3B due to tensor dimension alignment issues or GGML_ASSERT failures, sometimes resolved by clean rebuilds or suspected environment mismatches.
- issues/20307, issues/20570, issues/20587
- CUDA and GPU Usage Anomalies: Using BF16 KV-cache on NVIDIA RTX PRO 6000 GPUs causes CPU usage spikes and slow token processing due to fallback from CUDA, and missing CUDA memory leads to allocation failures when loading very large models.
- issues/20497, issues/20431
- Documentation and Versioning Issues: Documentation is missing for features like the MCP CORS proxy, and version numbers in configuration files are incorrect, requiring updates to match published versions.
- issues/20384, issues/20604
- Miscellaneous Bugs and Requests: Other issues include pyright warnings in conversion scripts, segmentation faults when running CUDA builds, and draft issues needing updates or fixes related to previous discussions.
- issues/20417, issues/20622, issues/20491, issues/20494
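The KV-cache prefix behavior in the cache-management item above can be illustrated with a short sketch (`reusable_prefix_len` is a hypothetical helper, not llama.cpp's actual cache code): only the longest common prefix between the cached prompt and the new one is reusable, so an edit near the start of a prompt invalidates nearly the whole cache even though most tokens are unchanged.

```python
def reusable_prefix_len(cached_tokens, new_tokens):
    """Number of leading tokens shared by the cached prompt and the new one.

    Only this prefix of the KV cache can be reused; everything after the
    first mismatch must be recomputed."""
    n = 0
    for a, b in zip(cached_tokens, new_tokens):
        if a != b:
            break
        n += 1
    return n

cached = [1, 2, 3, 4, 5, 6]
appended = [1, 2, 3, 4, 5, 6, 7, 8]  # pure continuation: full reuse
edited_early = [1, 9, 3, 4, 5, 6]    # early edit: almost nothing reused

print(reusable_prefix_len(cached, appended),
      reusable_prefix_len(cached, edited_early))  # 6 1
```

This is why appending to a conversation is cheap while inserting or rewriting near the top of the prompt triggers the frequent cache invalidation the issue reports.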
2.5 Issue Discussion Insights
This section will analyze the tone and sentiment of discussions within this project's open and closed issues that occurred within the past week. It aims to identify potentially heated exchanges and to maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open or closed issues from the past week.
III. Pull Requests
3.1 Open Pull Requests
This section provides a summary of pull requests that were opened in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.
Pull Requests Opened This Week: 67
Key Open Pull Requests
1. vulkan: chunked parallel kernel for GATED_DELTA_NET: This pull request introduces a chunked parallel kernel infrastructure for the Vulkan implementation of GATED_DELTA_NET, including three new compute shaders for intra-chunk computation, inter-chunk state propagation, and output reconstruction, along with performance improvements and fixes, while currently keeping chunked dispatch disabled pending further optimizations.
- URL: pull/20377
- Associated Commits: 949a7, 313ef, bf136, 992d7, b0323, efbde, d2fab, e22c2, 530e5, 88396, ab79f, 088cb, 88c02, c6715
2. CI: add hip quality check: This pull request introduces a new continuous integration workflow called "hip-check" designed to improve the quality of the HIP backend by enforcing Werror on builds to catch unrollable pragma unroll loops, ensuring compatibility with ROCm version 6.1 to prevent usage of newer unsupported functions, and adding a script to detect significant register spills in GCN/CDNA kernels, with the workflow intended to prompt investigation without necessarily blocking pull requests.
- URL: pull/20430
3. native QLoRA training with reward-weighted SFT and GRPO: This pull request introduces a native QLoRA training pipeline that integrates reward-weighted supervised fine-tuning and GRPO online reinforcement learning for quantized GGUF models, featuring new training modes, CUDA kernel optimizations, layer freezing, gradient checkpointing, and compatibility enhancements for LoRA adapters.
- URL: pull/20453
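The three-shader structure in the GATED_DELTA_NET pull request (intra-chunk computation, inter-chunk state propagation, output reconstruction) follows the standard chunked-scan pattern. The toy sketch below applies that pattern to a scalar recurrence h[t] = a[t]*h[t-1] + x[t]; the real kernels operate on matrix-valued state, and only the chunking structure is shared.

```python
def sequential_scan(a, x):
    """Reference: the plain left-to-right recurrence."""
    h, out = 0.0, []
    for at, xt in zip(a, x):
        h = at * h + xt
        out.append(h)
    return out

def chunked_scan(a, x, chunk=4):
    # Phase 1 (intra-chunk): scan each chunk from state 0, recording the
    # cumulative decay so the carried state can be folded in later.
    partials = []
    for s in range(0, len(x), chunk):
        h, dec, part = 0.0, 1.0, []
        for at, xt in zip(a[s:s + chunk], x[s:s + chunk]):
            h = at * h + xt
            dec *= at
            part.append((h, dec))
        partials.append(part)
    # Phase 2 (inter-chunk): sequentially propagate the carried state.
    carries, carry = [], 0.0
    for part in partials:
        carries.append(carry)
        last_h, last_dec = part[-1]
        carry = last_dec * carry + last_h
    # Phase 3 (reconstruction): true output = decay * carry + intra state.
    return [dec * c + h
            for part, c in zip(partials, carries)
            for h, dec in part]

a = [0.9, 0.5, 0.8, 0.7, 0.6, 0.9, 0.4, 0.8, 0.5]
x = [1.0, -0.5, 2.0, 0.3, 1.1, -0.2, 0.7, 0.4, 0.9]
ok = all(abs(r - p) < 1e-9
         for r, p in zip(sequential_scan(a, x), chunked_scan(a, x)))
print(ok)  # True
```

Phase 1 is embarrassingly parallel across chunks, phase 2 is a short sequential pass over chunk summaries, and phase 3 is again parallel, which is what makes the layout attractive for a GPU dispatch.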
Other Open Pull Requests
- Flash Attention and Quantization Enhancements: Multiple pull requests introduce and optimize flash attention implementations and quantization support across CPU, GPU, and CUDA backends. These include MXFP flash attention with SIMD acceleration and Walsh-Hadamard rotation, native bf16 flash attention support in CUDA, and Metal GPU support for NVFP4 quantization with significant speedups on Apple Silicon.
- Vulkan and WebGPU Backend Stability and Performance Fixes: Several pull requests address synchronization, queue usage, and profiling improvements in Vulkan and WebGPU backends. Fixes include event wait submission, command buffer resets, timeline semaphore replacements, queue restrictions on RADV drivers, and batching timestamp query resolutions to reduce overhead.
- Model Loading and Conversion Improvements: Pull requests enhance model loading flexibility and conversion accuracy by adding support for loading GGUF models from POSIX file descriptors and fixing errors in converting Qwen3.5 models to GGUF format. These changes improve compatibility with Android restrictions and ensure correct tensor handling during conversion.
- Expert Profiling and Pruning Tools for MoE Models: A pull request introduces REAP-style expert profiling and pruning tools for Mixture-of-Experts models, including a C++ profiler for saliency scoring and Python pruners that modify GGUF files or BF16 checkpoints to reduce model size while maintaining quality.
- Parser and Autoparser Refinements: Updates to the gpt-oss parser and autoparser improve grammar adherence, reasoning tag handling, and trigger rule enforcement. These changes remove forced open/closed formats, fix response format enforcement, and eliminate invalid test cases to enhance parsing accuracy and flexibility.
- CPU Feature Detection and Build Configuration Fixes: Pull requests add OS state validation for SIMD feature detection, update CI scripts to detect CPU features for compiler flags, and fix build warnings related to the KleidiAI backend. These ensure safer CPU instruction usage and cleaner build processes.
- RPC and Networking Enhancements: An optional native RDMA transport is added to the ggml-rpc backend, improving RPC communication performance on supported hardware while maintaining backward compatibility through an opt-in flag.
- Model Quantization Interface Refactor: The function `llama_model_quantize_params` is refactored to provide a pure C interface, enhancing accessibility and integration with C-based projects.
- Web UI Improvements: Updates to the web user interface include making the server authoritative for sampling parameter defaults, fixing badge displays, enabling the "Model Information" dialog in router mode, adding model list headers, and improving scrolling usability by removing nested scrollbars.
- Computation Graph Handling in OpenVINO GGML Decoder: A new function and interface enhancements enable detection and handling of split-model computation graphs, allowing the decoder to identify fragments and adjust computation logic for improved graph processing and fallback support.
- Bug Fixes and Safeguards: Various fixes include replacing UINT64_MAX with LONG_TIMEOUT to isolate a Dawn/llvm-pipe bug, preventing simultaneous use of conflicting flags in `llama-bench`, and correcting JSON string misinterpretation causing test failures on Windows.
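The MoE expert profiling and pruning item above can be sketched at a high level: score each expert with a saliency statistic collected over a calibration set, then keep only the top-k. The scoring rule and names below are illustrative, not the pull request's actual metric.

```python
def expert_saliency(activations):
    """Mean absolute activation per expert: a simple saliency proxy."""
    return {expert: sum(abs(v) for v in vals) / len(vals)
            for expert, vals in activations.items()}

def prune_experts(activations, keep):
    """Return the ids of the top-`keep` experts by saliency, sorted."""
    scores = expert_saliency(activations)
    ranked = sorted(scores, key=scores.get, reverse=True)
    return sorted(ranked[:keep])

# Profiled activations for 4 experts over a small calibration set.
acts = {
    0: [0.9, 1.1, 0.8],    # heavily used
    1: [0.01, 0.0, 0.02],  # nearly dead
    2: [0.5, 0.4, 0.6],
    3: [0.05, 0.1, 0.0],
}
print(prune_experts(acts, keep=2))  # [0, 2]
```

The actual tools additionally rewrite the GGUF file or BF16 checkpoint so the dropped experts' tensors are removed, which is where the model-size reduction comes from.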
3.2 Closed Pull Requests
This section provides a summary of pull requests that were closed in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.
Pull Requests Closed This Week: 145
Key Closed Pull Requests
1. update: This pull request involves the deletion of numerous directories and files across the repository, effectively removing a wide range of examples, configuration files, scripts, tests, and documentation, but it was not merged.
- URL: pull/20389
- Associated Commits: fecd0, d2089, d356a, 8c580, 5d4cb, fb0f0, 0c514, 746c3, 66a23, f6d7b, 381ef, 183d1, 7d437, ebd65, 2f6ce, 50ca4, a93dc, ffde3, 5d279, 1c9ca, 35666, f2cc6, 25d31, a5da8, 258a1, dbe33, 7204e, 13e9a, 6205c, 5ae06, 4bf1c, e1755, 2a8e5, 099cc, 2f170, cf61b, 7a017, 8d0ae, 2a307, e2b24, 659d7, 7ee5a, 2576a, 864ea, 8d387, 9ba7a, e268a, 988ed, c8188, 29b2e, 34a66, 28d12, 7024d, 214db, 564f9, 8b144, ae253, 95828, c7900, d6648, 1768a, ac0c2, 9de4b, 13074, 7df78, a605a, c78cc, 18be3, c4839, 67277, 62c36, 05342, 4111b, 63b5d, b347f, fe9d4, 8e636, 93e45, c1c19, 2a7ae, 208a0, 03db9, 16c78, 95ab8
2. common: improve error message when setting process priority fails: This pull request aims to improve the error message on POSIX systems when setting process priority fails due to insufficient privileges, clarifying that elevated permissions (such as running with sudo) may be required when using negative nice values with options like --prio high or --prio realtime.
- URL: pull/20584
- Associated Commits: 266eb, 7ccfa, aa50b, 8c219, 109e6, 20489, 76356, 60ce9, d7bf9, 3c78b, db7c5, 30864, 96175, f902f, b497f, 1c54d, dcfb7, 26959, fe8df, 47b77, ee82c, 715d6, 16486, a3b35, f52f8, c341a, 64c3b, d408d, a8151, daeb1, 45aad, 6a89c, 062ec, d7dd7, cfb6c, f058c, 95294, 6b625, 1be9a, 4f219, b93da, 66bbd, e3606, 7d181, 98a66, a1fd9, 2e645, c6b52, 15a40, b8f07
3. ggml : add NVFP4 quantization type support for metal: This pull request adds support for the NVFP4 quantization type to the ggml library's Metal backend, including implementation improvements, performance optimizations on ARM NEON, fixes for shader compilation, and validation with benchmark speedups demonstrating significant acceleration on Metal compared to CPU.
- URL: pull/20060
- Associated Commits: f5a13, 8b4e7, e3e13, cfe06, 984aa, cd84c, befad, cf1d5, 7c730, 622a6, 3c6f4, 06e14, 04870, a8f8f, fe52c, 4f232, dc5a0, b9985, 5951d, 36491, fa018, 68a6e, ee52f, 81218, a27ee, 73bd0, d6d33, b0c75, a26f2, 6e434, 52b9b, 0519b, 2009a, 733ca, ff1ee, 3f97d, b5912, fa669, 9fa4d, f7523, 27cf7, 09640, 2e96f, da39e
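The improved priority-failure message from pull request 2 above can be mimicked in a few lines. The project itself is C/C++ and calls the POSIX setpriority interface; this Python sketch uses the equivalent `os.setpriority`, and `describe_prio_failure` is a hypothetical helper showing the EPERM case that warrants the sudo hint.

```python
import errno
import os

def describe_prio_failure(err, prio):
    """Build an actionable message: on POSIX, a negative nice value
    needs elevated privileges (e.g. CAP_SYS_NICE or sudo)."""
    msg = f"failed to set process priority {prio}: {os.strerror(err)}"
    if err == errno.EPERM and prio < 0:
        msg += " (hint: negative nice values require elevated privileges, e.g. sudo)"
    return msg

def set_priority(prio):
    """Try to renice the current process; return an error message on failure."""
    try:
        os.setpriority(os.PRIO_PROCESS, 0, prio)
        return None
    except PermissionError as e:
        return describe_prio_failure(e.errno, prio)

print(describe_prio_failure(errno.EPERM, -10))
```

An unprivileged run of something like `--prio high` would hit exactly this branch, and the hint tells the user why instead of printing a bare "Operation not permitted".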
Other Closed Pull Requests
3.3 Pull Request Discussion Insights
This section will analyze the tone and sentiment of discussions within this project's open and closed pull requests that occurred within the past week. It aims to identify potentially heated exchanges and to maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open or closed pull requests from the past week.
IV. Contributors
4.1 Contributors
Active Contributors:
We consider an active contributor in this project to be any contributor who has made at least 1 commit, opened at least 1 issue, created at least 1 pull request, or made more than 2 comments in the last month.
If there are more than 10 active contributors, the list is truncated to the top 10 based on contribution metrics for better clarity.
| Contributor | Commits | Pull Requests | Issues | Comments |
|---|---|---|---|---|
| matrixportalx | 182 | 0 | 0 | 0 |
| ggerganov | 105 | 26 | 0 | 29 |
| CISC | 37 | 10 | 0 | 57 |
| pwilkin | 63 | 12 | 2 | 19 |
| 0cc4m | 49 | 7 | 0 | 36 |
| rodgerhubhay | 84 | 2 | 0 | 0 |
| allozaur | 76 | 1 | 0 | 0 |
| richarddd | 63 | 5 | 0 | 1 |
| No author found | 52 | 0 | 0 | 0 |
| aldehir | 23 | 4 | 0 | 23 |