Weekly GitHub Report for Pytorch: May 12, 2025 - May 19, 2025 (12:02:34)
Weekly GitHub Report for Pytorch
Thank you for subscribing to our weekly newsletter! Each week, we deliver a comprehensive summary of your GitHub project's latest activity right to your inbox, including an overview of your project's issues, pull requests, contributors, and commit activity.
Table of Contents
I. News
1.1 Recent Version Releases:
The current version of this repository is v2.6.0
1.2 Version Information:
The PyTorch 2.6 release, created on January 29, 2025, introduces significant updates, including support for `torch.compile` with Python 3.13, a new performance-related feature `torch.compiler.set_stance`, and enhancements to AOTInductor. Notable changes include the deprecation of publishing on Conda, the introduction of FP16 support on X86 CPUs, and a backward-compatibility-breaking change in the default behavior of `torch.load`.
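The `torch.load` change refers to `weights_only` now defaulting to `True`. The snippet below is a minimal, hedged illustration of the new default and of opting back into the old behavior for trusted checkpoints; the file name and contents are illustrative only.

```python
# Minimal illustration of the torch.load default-behavior change in 2.6:
# weights_only now defaults to True, so loading arbitrary pickled objects
# requires an explicit opt-out (only do this for checkpoints you trust).
import torch

torch.save({"step": 10, "weights": torch.randn(3)}, "ckpt.pt")

state = torch.load("ckpt.pt")                       # new default: weights_only=True
legacy = torch.load("ckpt.pt", weights_only=False)  # pre-2.6 behavior, trusted files only
```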
II. Issues
2.1 Top 5 Active Issues:
We consider active issues to be issues that have been commented on most frequently within the last week. Bot comments are omitted.
- [BUG] `einops` is unsupported and breaks the dynamo graph with torch 2.7: This issue reports a bug where the `einops` library, previously supported in PyTorch 2.6, now causes a failure in the dynamo graph compilation process in PyTorch 2.7 due to an unsupported method call. The problem appears to be related to a change in how `einops` is imported, specifically affecting versions greater than 0.7.0, and a workaround is suggested by manually importing `einops._torch_specific` (a minimal sketch of this workaround appears after this list).
  - The comments discuss the need for `einops` to handle `allow_in_graph` internally, with some suggesting that the issue should be addressed on the `einops` side. There is a cross-reference to an issue filed in the `einops` repository and a suggestion to coordinate better between `einops` and PyTorch for future versions. A potential fix for PyTorch 2.7.1 is mentioned, with a plan to expand the import versions, and a contributor expresses interest in working on the issue.
  - Number of comments this week: 9
- [XPU] Kineto profiler fails on XPU with `PTI_ERROR_NOT_IMPLEMENTED`: This issue involves a failure of the Kineto profiler on XPU devices following an upgrade to oneAPI 2025.1, resulting in a `PTI_ERROR_NOT_IMPLEMENTED` error when attempting to use the PyTorch Profiler. The problem appears to be related to missing libraries in the runtime package, specifically `libpti.so`, which is not included in the pip package but is present in the development package.
  - The comments discuss attempts to reproduce the issue, with some users able to replicate the error on different systems and others not experiencing it. A missing library in the runtime package is identified as the cause, and a temporary workaround involving installing additional packages and sourcing environment variables is suggested.
  - Number of comments this week: 8
- torch.set_ on a view does not sever view relation: This issue describes a bug in PyTorch where the `torch.tensordot` function creates a view that retains references to the underlying storage, preventing GPU memory from being freed even when `requires_grad` is set to `False`. The problem is resolved by using the `detach` method, which removes autograd traces and allows the memory to be released, unlike the `matmul` operation, which does not require such a workaround.
  - The comments discuss the root cause of the issue, which is the creation of computation graph traces by `tensordot`, and propose potential solutions such as modifying `tensordot` to avoid unnecessary autograd metadata when `requires_grad=False`. There is also a discussion about the behavior of views and how to manage memory deallocation, with suggestions to modify the base storage directly or to introduce warnings in the documentation.
  - Number of comments this week: 7
- [inductor] Make precompilation_timeout_seconds into a config instead of hardcoded it as 3600: This issue involves modifying the PyTorch project to change the precompilation timeout setting from a hardcoded value of 3600 seconds to a configurable parameter. The motivation for this change is to allow for more flexibility in benchmarking scenarios, such as testing a larger number of kernels without encountering a timeout.
  - Multiple contributors expressed interest in working on the issue, with some asking to be assigned. One contributor attempted to resolve it but was unable to complete the task and decided to pursue another issue.
  - Number of comments this week: 6
- FSDP2 "got mixed torch.Tensor and DTensor": This issue describes a bug encountered when using FSDP2 (fully_sharded_data_parallel) in PyTorch, where calling `model.forward()` twice consecutively without an intervening `loss.backward()` results in a mixed torch.Tensor and DTensor error. The user is unable to call `loss.backward()` between the forward passes due to the need to compute the loss after both passes, and they suspect that activation checkpointing might be related to the problem, as it seems to affect the allgathering of parameters during the forward pass recomputation.
  - The comments discuss whether the issue is specific to FSDP or involves other parallelism methods, with the user confirming that the problem persists with FSDP and activation checkpointing. The user clarifies that they are not seeking gradient accumulation but are implementing a specific model training approach that leads to the error. It is noted that turning off activation checkpointing resolves the issue, suggesting a conflict between FSDP and activation checkpointing during the second forward pass. The user seeks potential workarounds for this problem.
  - Number of comments this week: 5
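For reference, here is a minimal sketch of the workaround reported in the `einops` issue above. It assumes einops newer than 0.7.0 running on torch 2.7, and it illustrates the reported workaround rather than a confirmed fix; the compiled function and shapes are illustrative only.

```python
# Sketch of the reported workaround: import einops' torch-specific module up front so
# its allow_in_graph registration runs before torch.compile traces code that uses einops.
import einops._torch_specific  # noqa: F401
import einops
import torch

@torch.compile
def flatten_channels(x):
    return einops.rearrange(x, "b c h w -> b (c h w)")

print(flatten_channels(torch.randn(2, 3, 4, 4)).shape)  # torch.Size([2, 48])
```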
2.2 Top 5 Stale Issues:
We consider stale issues to be issues that have had no activity within the last 30 days. The team should work together to get these issues resolved and closed as soon as possible.
- ImportError: cannot import name 'triton_key' from 'triton.compiler.compiler': This issue involves an ImportError encountered when attempting to import 'triton_key' from 'triton.compiler.compiler', which is causing a backend compiler failure in a PyTorch environment. The error occurs during the execution of a Python script that utilizes the OotdPipeline and attempts to compile certain components with Torch's compile function, specifically affecting the 'inductor' backend.
- Alternate algorithm for computing MaxPool2D under specific condition.: This issue proposes an alternative algorithm for computing the MaxPool2D operation in PyTorch when the stride is equal to 1, suggesting that a kernel size of 5 can be represented by two MaxPool2D operations with a kernel size of 3, and similarly for other kernel sizes (a quick numerical check of this equivalence appears after this list). The motivation behind this approach is to reduce computational costs on the CPU by modifying the MaxPool2D layer directly, as demonstrated by testing code that shows a significant speedup in execution time.
- cuda_utils.so: failed to map segment from shared object: This issue involves a bug encountered when running a PyTorch model within a Docker container, where the execution of a cached shared object file, `cuda_utils.so`, fails due to a missing execution permission despite being run as the root user. The problem arises specifically in a Docker environment with a `tmpfs` permission set to `1777`, causing an error message indicating a failure to map a segment from the shared object, which is crucial for the model's execution.
- Enable UFMT on all files in PyTorch: This issue involves enabling uniform formatting (UFMT) across all files in the PyTorch codebase, as currently, approximately 1,500 files are not formatted according to the UFMT standards. The process requires removing file names from the `exclude_patterns` in the `UFMT` section of the `.lintrunner.toml` file and running a specific command to apply the formatting, with additional preparatory work needed to resolve known issues such as import cycles and misplaced annotations before the UFMT changes can be committed.
- [JIT archive] Add a flag to not include debug files: This issue proposes the addition of a flag to the `torch.jit.save()` function in PyTorch to exclude `.debug_pkl` files, which are primarily used for debugging purposes and can significantly increase the file size of TorchScript models compared to ONNX models. The motivation behind this feature request is to reduce the storage footprint of models, particularly for deployment on mobile devices, by eliminating unnecessary debug files that can occupy a substantial portion of the model's total size.
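As a quick, hedged check of the MaxPool2D decomposition claim above (an illustrative script, not code from the issue): with stride 1 and max pooling's implicit negative-infinity padding, two kernel-size-3 pools compose into one kernel-size-5 pool.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 32, 32)

# One 5x5 max pool with stride 1 ...
one_pass = F.max_pool2d(x, kernel_size=5, stride=1, padding=2)
# ... equals two stacked 3x3 max pools with stride 1 (a max of maxes over the same window).
two_pass = F.max_pool2d(F.max_pool2d(x, kernel_size=3, stride=1, padding=1),
                        kernel_size=3, stride=1, padding=1)

torch.testing.assert_close(one_pass, two_pass)
print("5x5 pool matches two stacked 3x3 pools")
```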
2.3 Open Issues
This section lists, groups, and then summarizes issues that were created within the last week in the repository.
Issues Opened This Week: 90
Summarized Issues:
- PyTorch Inductor and Compilation Issues: This category includes various issues related to the PyTorch Inductor and compilation processes. Problems such as incorrect kernel fusion, compilation errors with Triton, and failures in dynamic shape handling are prevalent. These issues often result in performance inefficiencies or outright failures during model execution, particularly on specific hardware configurations.
- Distributed and Parallel Computing Challenges: Several issues highlight challenges in distributed and parallel computing within PyTorch. These include memory management problems, errors in multi-GPU setups, and inefficiencies in distributed data parallel training. Such issues can lead to increased memory usage, training slowdowns, or even training failures.
- Data Type and Numerical Precision Issues: PyTorch faces several issues related to data types and numerical precision. These include discrepancies in numerical results between CPU and GPU, handling of specific data types like `float16`, and precision errors in mathematical operations. Such issues can lead to incorrect model outputs or unexpected behavior during computations (a small tolerance-based comparison sketch appears after this list).
- Functionality and API Inconsistencies: There are multiple issues concerning inconsistencies in PyTorch's functionality and APIs. These include unexpected behavior in functions like `torch.dequantize`, `torch.aminmax`, and `torch.tensordot`, as well as discrepancies in API behavior across different backends. Such inconsistencies can lead to confusion and errors in model development and deployment.
- Export and Serialization Problems: Issues in exporting and serializing models in PyTorch are highlighted, particularly with the `torch.export` function and ONNX exporter. These problems can hinder model deployment and integration with other systems, affecting the usability of PyTorch in production environments.
- Profiling and Performance Monitoring: PyTorch's profiling and performance monitoring capabilities face several challenges, including misleading profiler outputs and missing libraries for specific devices. These issues can obscure performance bottlenecks and complicate the optimization of PyTorch applications.
- Documentation and Build System Issues: Problems with PyTorch's documentation and build system are noted, such as incorrect links in documentation and inconsistent build outputs. These issues can lead to confusion and hinder the development process for users relying on accurate documentation and build artifacts.
- Memory Management and Optimization: Several issues pertain to memory management and optimization in PyTorch, including inefficient memory usage and the need for heterogeneous memory allocation support. Addressing these issues is crucial for optimizing performance on diverse hardware architectures.
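Related to the data-type and precision category above, here is a minimal, hedged sketch of how CPU/GPU `float16` discrepancies are usually assessed; the shapes and the matmul workload are illustrative and not taken from any of the issues.

```python
import torch

x = torch.randn(256, 256)
ref = x @ x.t()  # float32 CPU reference

if torch.cuda.is_available():
    # Reduced-precision results legitimately differ across devices and kernels,
    # so comparisons are tolerance-based rather than exact.
    approx = (x.cuda().half() @ x.cuda().half().t()).float().cpu()
    print("max abs difference:", (approx - ref).abs().max().item())
```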
2.4 Closed Issues
This section lists, groups, and then summarizes issues that were closed within the last week in the repository. This section also links the associated pull requests if applicable.
Issues Closed This Week: 42
Summarized Issues:
- PyTorch Functionality Bugs: This category includes various issues related to bugs in PyTorch's functionality, such as incorrect outputs, runtime errors, and discrepancies in behavior across different devices. These issues highlight problems with specific functions like `log_softmax`, `scaled_dot_product_attention`, and `torch.compile`, which lead to incorrect computations or crashes under certain conditions.
- Compilation and Backend Errors: These issues involve errors related to PyTorch's compilation process and backend support, particularly on specific hardware like MPS devices and Windows 10. Problems include syntax errors in generated code and illegal instruction errors due to mismatched build environments.
- Testing and Continuous Integration Failures: This group of issues pertains to failures in PyTorch's testing and CI processes, including intermittent test failures and missing dependencies. These problems affect the stability and reliability of the CI pipeline, requiring ongoing investigation and fixes.
- Performance and Optimization Issues: These issues focus on performance bottlenecks and optimization challenges in PyTorch, such as excessive cudagraph re-recording and delays in tensor distribution operations. Addressing these issues often involves updates to libraries or changes in implementation strategies.
- Feature Requests and Enhancements: This category includes requests for new features or enhancements to existing PyTorch functionalities, such as the introduction of a `cuda_tools` utility package and modifications to the `torch.device` class for improved usability.
- Quantization and Precision Discrepancies: These issues highlight discrepancies in PyTorch's quantization functions and precision handling, where operations yield different results on CPU and GPU or across data types, indicating potential bugs in the computation process.
- Documentation and Usability Concerns: This group addresses issues related to PyTorch's documentation and usability, such as missing links in the documentation and misleading behavior in equality checks, which can hinder user experience and understanding.
- Security and Backporting Challenges: This issue involves the challenge of backporting a security fix to an older PyTorch version due to constraints, highlighting the difficulties in maintaining security across different versions.
- Infrastructure and Resource Management: These issues pertain to infrastructure challenges, such as large queue times and resource allocation problems, which affect the efficiency and responsiveness of PyTorch's development and testing environments.
- Model Export and Compatibility Issues: This issue describes a bug in model export functionality, where using `functools.partial` leads to an `AttributeError`, affecting compatibility with common patterns in model development (a brief export sketch appears after this list).
- Device and Data Type Support: This issue involves a lack of support for specific data types in PyTorch's operations, such as the absence of implementation for 'Float8_e4m3fn' in certain kernels, which is crucial for specific model operations.
- Regression and Precision Issues: This issue highlights a regression in PyTorch's precision handling on specific hardware, where operations produce incorrect results due to changes in precision settings, affecting the accuracy of computations.
- Dimension and Indexing Errors: This issue pertains to errors related to dimension mismatches and indexing conflicts in PyTorch, which can lead to runtime errors and incorrect tensor operations.
- Closed and Unresolved Issues: This issue involves a closed ticket with no further details provided, indicating a lack of resolution or follow-up on the reported problem.
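For context on the model-export item above, here is a minimal, hedged sketch of the plain `torch.export` path on an unwrapped module; the reported `AttributeError` arose when `forward` was replaced with a `functools.partial`, which this baseline avoids. The module and shapes are illustrative, not taken from the issue.

```python
import torch

class Scale(torch.nn.Module):
    def __init__(self, scale: float):
        super().__init__()
        self.scale = scale

    def forward(self, x):
        return x * self.scale

# Export the module directly instead of partial-binding its forward method.
ep = torch.export.export(Scale(2.0), (torch.randn(2, 2),))
print(ep)
```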
2.5 Issue Discussion Insights
This section will analyze the tone and sentiment of discussions within this project's open and closed issues that occurred within the past week. It aims to identify potentially heated exchanges and to maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open or closed issues from the past week.
III. Pull Requests
3.1 Open Pull Requests
This section provides a summary of pull requests that were opened in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.
Pull Requests Opened This Week: 158
Key Open Pull Requests
1. [Draft][Just for CI VAL]Xpu flex attn ci test: This pull request is a draft intended for continuous integration validation, focusing on enabling and testing the FlexAttention feature on XPU devices within the PyTorch project, including various updates and fixes such as enabling unit tests, addressing floating-point precision issues, and integrating device-specific configurations.
- URL: pull/153680
- Merged: No
- Associated Commits: 0f457, 2cc84, 113dc, b557d, 686e0, 5beac, 6da92, b51f8, 2ff07, bbc1f, e0be1, 9d6eb, 6ff44, 401b3, dce24, 75a52, 39971, 3e1d1, 453ec, 2c279, 21455, cbb7f, 58a68, 973eb, 272c8, 0b4e4, 33cb3, ca1f7, 28715, ba427, 7e69c, 24dc0, fd14d, f79f8, ee968, f7cff, 74939, a12cc, 377ee, 5250f, 91c37, b1c8e, e4bc4, f6b82, b83e4, 90b6f, b2103, 01079, fc93c, b7bbb, c8104, d2ee4, 54d8d, 685bf, ffdb4, e0d24, a6f26, 0e89c, 800be, 962ee, 6fc32
2. [Monitoring] Add util for linux build: This pull request aims to add a utility for Linux builds in the PyTorch project, as indicated by the title "[Monitoring] Add util for linux build," and it includes multiple commits primarily focused on removing logs, testing, and adding model tests, although it has not yet been merged.
- URL: pull/153456
- Merged: No
- Associated Commits: dda0f, 36c21, bb1b3, 9d084, 62da4, 2d8e3, 4e2ba, 1080a, c8d42, a4e13, 568e7, cd145, b0149, 62b16, a2aae, bf915, c1c96, db591, 541ee, 513fd, afef1, 2362f, 9a81d, e7503, efd4c, 2b31f, 532ed, 942c4, 02ab3, 4810f, 4c51f, fbbd1, 0ca34, 4ba45, d4a1f, cd8c6, a31f9, bc8e6, 50e29, 2066f, 6938d, 0de95, 263bc, 5ee40, 3a432, 85ad1
3. [Monitoring] enable local logs and add mac test monitoring: This pull request aims to enhance the monitoring capabilities of the PyTorch project by enabling local logging and adding test monitoring for macOS, allowing the upload utilization logic to operate using a local pointer instead of relying on data from S3, which could also benefit ROCm.
- URL: pull/153454
- Merged: No
- Associated Commits: 0683d, b3e98, 9e5e2, 46b43, 32fa0, 6c3d7, d0ab2, 917fe, 4d00e, 000c6, 9027d, c21d5, d2976
Other Open Pull Requests
- Compiled Region Optimization: This topic involves optimizing compiled regions in PyTorch by enabling a single dynamo trace with multiple backend specializations. The pull requests focus on capturing backend specializations into a SymbolicContext and installing a lazy specialized dispatch function to reduce dispatch time and cache pollution.
- Dynamic Whitelist and GuardDebugInfo Refactoring: Enhancements to the PyTorch framework include suggesting a dynamic whitelist to reduce recompilations due to dynamic shape changes. The pull requests also refactor GuardDebugInfo to separate verbose code from failure reasons and provide detailed logging for recompilation triggers.
- CCCL 3.0.0 Compatibility: Updates to the ATen library address upcoming breaking changes in CCCL 3.0.0 by replacing deprecated CUB iterators with Thrust iterators and ensuring compatibility with CUDA 11.8. The pull requests also attempt to fix ROCm build issues.
- `is_known_contiguous` API Introduction: The introduction of the `is_known_contiguous` API optimizes reshape operations and tensor metadata computation by storing a contiguous attribute only when definitively known. This improves efficiency in code paths that previously relied on the `is_contiguous` check.
- Test Resilience Enhancements: Enhancements to `test_create_graph_and_full_backward_hook_cycle` make it more resilient to unrelated warnings. The pull requests involve multiple updates and commits addressing this issue in the PyTorch GitHub repository.
- Code Generation Refactoring: Refactoring efforts focus on minimizing the generation of NULLs in the code generation process. The pull requests include changes to `output_graph.py` to support nested graph breaks as part of a series of related updates.
- Documentation Updates: The PyTorch project's documentation is enhanced by updating the serialization docs. The pull requests are part of a stack of changes managed through the ghstack tool.
- Subgraph Input Layout Consistency: An issue with subgraphs not freezing layouts is addressed by constructing subgraphs with benchmarking arguments instead of example inputs. This ensures consistency between generated `example_inputs` and `args` used for benchmarking.
- User-Defined Sets Support: Support for user-defined sets is introduced in the PyTorch project. The pull requests are part of a stack of related changes tracked via the ghstack tool.
- Runtime Assertion Code Generation: A code generation issue is addressed by ensuring input nodes representing symbols used in runtime assertions are not deleted or replaced. This prevents invalid code generation and ensures correct runtime assertion emission.
- RMSNorm Implementation and Documentation: A fused implementation of RMSNorm is introduced, and the documentation of the RMSNorm class is enhanced by adding missing argument descriptions. The pull requests include several commits addressing fixes and improvements.
- ONNX Opset Support: Updates to the `onnx/symbolic_opset23.py` file add support for opsets 21, 22, and 23. The pull requests include a refactor of the `symbolic_opset23.py` file, addressing issue #153687 (a brief export-opset sketch appears after this list).
- Inductor `scaled_mm` Operation Support: Support for the `scaled_mm` operation is introduced in the Inductor component of PyTorch. The pull requests follow a specific pattern to integrate `scaled_mm` functionality and address issues such as removing unsupported features.
- HOP-ification of Out-of-Tree Functions: Enhancements to the compilation process allow the HOP-ification of out-of-tree functions. The pull requests are part of a series of related changes tracked through the ghstack tool.
- Autotune Cache Rechecking: The need to recheck the autotune cache when loading statically launchable Triton kernels from FxGraphCache is addressed. The pull requests ensure the best configuration is used even if derived from coordinate descent tuning.
- Test Contamination Prevention: Test contamination is prevented by ensuring the preferred backend setting for cuBLAS and cuBLASLt is not carried over across tests. Tests that need to evaluate both backends are explicitly parametrized to prevent unexpected behavior.
- Magma Version Update: The PyTorch project is updated to use Magma version 2.9.0. The pull requests address issues such as fixing typos, correcting spacing errors, and resolving ninja build errors.
- Benchmark Debug Information Management: Unnecessary compiler benchmark debug information is prevented from being uploaded to the benchmark database. The pull requests focus on avoiding rapid growth and bloat of the database.
- Visual Studio 2022 Optimization Issue: An issue caused by Visual Studio 2022's over-aggressive optimization is addressed by adding a flag to disable problematic optimization. The pull requests confirm the solution through local tests.
- Traceable C++ Reducer Introduction: A work-in-progress, traceable C++ reducer is introduced as part of a stack of related changes. The pull requests involve multiple updates and collaboration with several contributors.
- AOTAutogradCache Composability: An issue with the composability of the AOTAutogradCache is addressed by adding a tracing context manager around the specialized compile process. The pull requests ensure the caching infrastructure can access the ShapeEnv.
- Static Package Information Transfer: Static package information is transferred from the `setup.py` file to the `pyproject.toml` file. The pull requests are part of a series of updates tracked by ghstack.
- Data Dependency Management: Missing data dependencies due to mutations are addressed. The pull requests include references to internal documentation for testing details.
- Empty List 'dim' Parameter Handling: An issue with treating an empty list for the 'dim' parameter as equivalent to 'None' is addressed. The pull requests implement the fix, adjust conditional logic, and add a corresponding test.
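Relating to the ONNX opset item above, here is a minimal, hedged sketch of how an opset is selected at export time. The module and output file name are illustrative, and opset 18 is used here since opset 21-23 support is exactly what the pull requests are still adding.

```python
import torch

class Add(torch.nn.Module):
    def forward(self, x, y):
        return x + y

# opset_version selects which ONNX operator set the exporter targets; newer opsets only
# become usable once the corresponding symbolic_opset files support them.
torch.onnx.export(Add(), (torch.randn(2), torch.randn(2)), "add.onnx", opset_version=18)
```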
3.2 Closed Pull Requests
This section provides a summary of pull requests that were closed in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.
Pull Requests Closed This Week: 226
Key Closed Pull Requests
1. [Set] Add `set.issubset` and `set.issuperset`: This pull request proposes the addition of `set.issubset` and `set.issuperset` methods to the project, as part of a series of related changes tracked through a stack of pull requests, although it has not been merged yet.
- URL: pull/152902
- Merged: No
- Associated Commits: 86a00, 45494, 38079, 8e815, 7a75b, 5a90c, 3bfc3, b583a, 3511f, 99fa6, 0ee20, 8d825, ccdb9
2. [Set] Raise `KeyError` if elem not contained in the set: This pull request aims to modify the behavior of a set in the PyTorch project by raising a `KeyError` when an element is not contained within the set, as indicated by the title and the series of related commits.
- URL: pull/152903
- Merged: No
- Associated Commits: 62f33, c7f9d, b4693, 0a8fa, bf6ea, 01cbc, 9468c, 84242, 4d324, cf62f, 24654, 4310a, 8883d
3. [Set] Add `set.difference(_update)`: This pull request proposes the addition of a `set.difference(_update)` function to the project, as part of a larger stack of related changes, but it was ultimately not merged.
- URL: pull/152905
- Merged: No
- Associated Commits: 214d7, ed3e1, 9631f, f9c59, efae7, db94c, 30c01, ca6cd, 25de0, 5861d, 3521d, 38cb2, c9b03
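The three [Set] pull requests above extend dynamo's handling of Python set methods inside compiled code. Below is a minimal, hedged sketch of the kind of usage they target; the function and its arguments are illustrative, and without such support dynamo simply graph-breaks on the set method and falls back to eager execution.

```python
import torch

@torch.compile  # no fullgraph=True: an unsupported set method just causes a graph break
def gate(x, active_features: set, required_features: set):
    # set.issubset is the kind of built-in method the [Set] PRs teach dynamo to trace
    if required_features.issubset(active_features):
        return x * 2
    return x

print(gate(torch.ones(3), {"a", "b", "c"}, {"a", "b"}))
```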
Other Closed Pull Requests
- Set Method Enhancements: This series of pull requests focuses on enhancing the functionality of set methods in the PyTorch project. The changes include adding new methods like `set.intersection_update` and `set.symmetric_difference_update`, modifying `set.pop()` to raise a `KeyError` on empty sets, and supporting multiple arguments in the `set.union` and `set.update` methods.
- Error Handling Improvements: These pull requests aim to enhance error handling in the PyTorch project by raising `TypeError` for argument mismatches and unhashable arguments. They ensure robust code execution by preventing runtime errors and are part of a larger stack of related changes.
- Compile-Time and Performance Optimizations: This set of pull requests focuses on optimizing compile-time performance and runtime efficiency in the PyTorch project. They include caching function signatures for faster inlining and computing logging-related flags at compile-time to enhance performance.
- Gradient and Graph Enhancements: These pull requests aim to improve gradient computation and graph handling in the PyTorch library. They support higher order gradients with `create_graph=True` and incorporate backend specializations in the `compile_and_call_fx_graph` function.
- Quantization and Batch Normalization: This pull request introduces a new operation, `onednn.qbatch_norm2d`, for computing uint8 batch normalization on the CPU. It offers performance comparable to existing methods and supports additional output data types.
- Linter and Code Quality Improvements: These pull requests focus on improving code quality by updating the Ruff linter and introducing a linter to prevent hardcoding of "cuda" in test cases. They address false negatives, improve NOQA comment validation, and prevent test failures on XPU.
- Memory Layout and Performance: This pull request addresses performance degradation in the FP8 flex attention mechanism by enforcing specific memory layouts. It ensures tensors are stored in column-major format to reduce runtime and improve efficiency.
- Logging and Feature Usage: This pull request enhances the logging mechanism by indicating dynamic shape usage in models. It helps identify cases where dynamic shapes should have been used but were not due to incorrect configurations.
- Matrix Multiplication Optimization: This pull request introduces a fallback mechanism to switch from `bmm` to `mm` when the batch size is 1. It leverages specialized `mm` kernel optimizations for performance improvements without regression (a quick numerical illustration appears after this list).
- Parallel Loss and Dimension Handling: This pull request addresses a negative dimension issue in the parallel loss context manager. It implements a solution similar to a proposed issue, with contributions from multiple reviewers.
- Documentation and Navigation Improvements: This pull request reorganizes the right navigation and improves the documentation structure. It moves community links, adds an introduction, and fixes various links to enhance user experience.
- Shape Mismatch and Autotuning: This pull request addresses shape mismatches during autotuning with AOTI. It resolves issues caused by unbacked symbolic integers not being replaced, preventing runtime errors during tensor operations.
- URL Corrections: This pull request aims to fix multiple URL issues within the PyTorch project. It involves several commits to correct these URLs, although it was ultimately not merged.
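As a quick, hedged illustration of why the `bmm`-to-`mm` fallback above is safe: with a batch size of 1, a batched matmul is mathematically the same as a single `mm` on the squeezed inputs. The shapes below are illustrative.

```python
import torch

a = torch.randn(1, 64, 128)
b = torch.randn(1, 128, 32)

out_bmm = torch.bmm(a, b)                    # batched path, batch size 1
out_mm = torch.mm(a[0], b[0]).unsqueeze(0)   # what a specialized mm path computes

torch.testing.assert_close(out_bmm, out_mm)
print("bmm with batch size 1 matches mm on the squeezed inputs")
```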
3.3 Pull Request Discussion Insights
This section will analyze the tone and sentiment of discussions within this project's open and closed pull requests that occurred within the past week. It aims to identify potentially heated exchanges and to maintain a constructive project environment.
- Fix AsyncMM not compiled with SM90a issue
- Toxicity Score: 0.55 (Frustration expressed, Tense exchange, Mediation attempt)
- This GitHub conversation involves username1 expressing concern over a compilation issue, with username2 providing a potential solution. Username1 responds with frustration when the solution does not resolve the issue, leading to a tense exchange. Username3 attempts to mediate by suggesting alternative approaches, but the tone remains strained as username1 continues to express dissatisfaction.
IV. Contributors
4.1 Contributors
Active Contributors:
We consider an active contributor in this project to be any contributor who has made at least 1 commit, opened at least 1 issue, created at least 1 pull request, or made more than 2 comments in the last month.
If there are more than 10 active contributors, the list is truncated to the top 10 based on contribution metrics for better clarity.
| Contributor | Commits | Pull Requests | Issues | Comments |
|---|---|---|---|---|
| malfet | 200 | 16 | 8 | 103 |
| anijain2305 | 200 | 15 | 2 | 9 |
| guilhermeleobas | 187 | 18 | 2 | 7 |
| Skylion007 | 66 | 25 | 4 | 96 |
| laithsakka | 77 | 20 | 14 | 32 |
| swolchok | 108 | 8 | 0 | 17 |
| bobrenjc93 | 91 | 12 | 4 | 17 |
| henrylhtsang | 79 | 11 | 9 | 20 |
| pianpwk | 70 | 11 | 3 | 7 |
| cyyever | 50 | 18 | 0 | 23 |