Weekly GitHub Report for Tensorflow: December 01, 2025 - December 08, 2025 (12:05:13)
Thank you for subscribing to our weekly newsletter! Each week, we deliver a comprehensive summary of your GitHub project's latest activity right to your inbox, including an overview of your project's issues, pull requests, contributors, and commit activity.
Table of Contents
I. News
1.1 Recent Version Releases:
The current version of this repository is v2.19.0
1.2 Version Information:
Released on March 5, 2025, TensorFlow 2.19.0 introduces breaking changes to the tf.lite API, including the deprecation of tf.lite.Interpreter in favor of ai_edge_litert.interpreter and changes to certain C++ constants for improved API flexibility. Key updates also include runtime support for the bfloat16 data type in the tfl.Cast operation, alongside the discontinuation of standalone libtensorflow package publishing; the libraries can still be extracted from the corresponding PyPI wheels.
II. Issues
2.1 Top 5 Active Issues:
We consider active issues to be issues that have been commented on most frequently within the last week. Bot comments are omitted.
-
NaN loss on multi-GPU training with certain pairs of GPU: This issue reports a bug where training a model using multi-GPU setups with certain pairs of NVIDIA GeForce GTX 1080 Ti GPUs results in the loss value becoming NaN, causing training to fail. The user provides detailed hardware and software configurations, including TensorFlow versions, CUDA/cuDNN versions, and a minimal reproducible example, highlighting that the problem occurs only with specific GPU pairings.
- The comments reference related issues and include a brief exchange where a contributor expresses interest in working on the issue and receives confirmation that it is available for assignment.
- Number of comments this week: 3
-
native tensorflow runtime: This issue concerns an ImportError encountered when attempting to load the native TensorFlow runtime on a Windows 64-bit system using Python 3.11, resulting in a DLL initialization failure. The user reports custom code and gives the TensorFlow version as 3.11 (no such release exists; this likely repeats the Python version), and the error occurs during the import of TensorFlow's internal modules, preventing the runtime from loading.
- The comments include a request from a contributor to work on the issue, followed by a detailed response asking for environment details, TensorFlow version, and compatible packages to diagnose potential causes such as missing MSVC redistributables, CPU instruction support, or 32-bit installations. The user then provides some package information and TensorFlow version, while the responder suggests the issue might be a duplicate of previously reported problems and requests further installation details.
- Number of comments this week: 3
-
XLA Compile Error: Conv2D fails due to negative dimension from kernel larger than input: This issue describes a bug encountered when using XLA compilation in TensorFlow, where a Conv2D operation fails with a ValueError due to a negative dimension size caused by the kernel size being larger than the input dimensions. The error occurs only under @tf.function(jit_compile=True) and not in eager execution, indicating a discrepancy between the two execution modes when the Conv2D input tensor is smaller than the kernel.
- The first comment reports successful execution of the provided code without encountering the error, showing expected input and output shapes and confirming XLA compilation. The second comment asks another contributor to investigate and verify whether they can reproduce the bug.
- Number of comments this week: 2
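A minimal hedged sketch of the shape condition the issue describes, assuming TensorFlow is installed; the function name and tensor shapes below are illustrative, not taken from the reporter's code.

```python
import tensorflow as tf

@tf.function(jit_compile=True)
def conv(x):
    # A 7x7 kernel over a 5x5 input with VALID padding implies a negative
    # output dimension ((5 - 7) + 1 = -1), which shape inference rejects.
    kernel = tf.zeros([7, 7, 1, 1])
    return tf.nn.conv2d(x, kernel, strides=1, padding="VALID")

try:
    conv(tf.zeros([1, 5, 5, 1]))
except Exception as err:
    print(type(err).__name__)
```

Padding the input up to at least the kernel size, or using SAME padding, avoids the negative output dimension in either execution mode.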
-
Segmentation fault when using tf.constant_initializer with complex128 variables in TensorFlow 1.x: This issue reports a segmentation fault occurring when using tf.constant_initializer with complex128 variables in TensorFlow 1.x, specifically during the initialization of variables with complex values. The user expects the variable to initialize successfully but instead encounters a core dump, indicating a critical failure in the process.
- A contributor has submitted a related pull request aimed at fixing the segmentation fault and is requesting feedback and review from the original reporter and other community members.
- Number of comments this week: 1
-
XLA fails to compile tf.cond when true_fn and false_fn return tensors of different dtypes: This issue describes a bug where XLA compilation fails when using tf.cond with true_fn and false_fn branches that return tensors of different data types, specifically float32 and float64. Although the model runs correctly in eager mode, the XLA compiler raises a TypeError during graph compilation because the output tensors from the two branches do not have matching dtypes, causing the program to abort.
- The comment advises ensuring that both branches of tf.cond return tensors with the same dtype to avoid the compilation error, highlighting that operations involving different dtypes in tf.cond lead to this failure.
- Number of comments this week: 1
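The advice in that comment can be sketched as follows, assuming TensorFlow is installed; the function below is hypothetical and simply shows both tf.cond branches being forced to a common dtype so XLA's graph compilation succeeds.

```python
import tensorflow as tf

@tf.function(jit_compile=True)
def branchy(x):
    # Both branches must return the same dtype; the float64 branch is cast
    # back to float32 so XLA sees matching outputs.
    return tf.cond(
        tf.reduce_sum(x) > 0,
        true_fn=lambda: x * 2.0,  # stays float32
        false_fn=lambda: tf.cast(tf.cast(x, tf.float64) * 3.0, tf.float32),
    )

print(branchy(tf.ones([2], tf.float32)).numpy())   # [2. 2.]
print(branchy(-tf.ones([2], tf.float32)).numpy())  # [-3. -3.]
```

Casting at the branch boundary keeps the float64 intermediate computation while presenting a single output dtype to the compiler.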
2.2 Top 5 Stale Issues:
We consider stale issues to be issues that have had no activity within the last 30 days. The team should work together to get these issues resolved and closed as soon as possible.
- TF-TRT Warning: Could not find TensorRT: This issue describes a problem where TensorFlow on Ubuntu 22.04 cannot detect TensorRT despite having a compatible NVIDIA RTX 3050 Ti GPU and CUDA 12.4 installed, with the user having to manually downgrade the NVIDIA driver from version 550 to 535 to maintain system stability. The user reports persistent errors and warnings related to TensorRT not being found, despite multiple reinstallations, and is seeking assistance to resolve this configuration and compatibility issue.
- SystemError in tf.ensure_shape and tf.compat.v1.ensure_shape when dtype of shape is tf.uint64 and its value is too large: This issue reports a bug in TensorFlow where using tf.ensure_shape or tf.compat.v1.ensure_shape with a shape tensor of type tf.uint64 containing very large values close to 2^64 causes a SystemError and OverflowError. Specifically, when such large uint64 values are passed in eager execution mode, the functions fail with an internal error related to type checking, indicating improper handling of large unsigned 64-bit integers.
- Feature Request: Integrate different Digital Signal Processing into tf.signal: This issue is a feature request proposing the integration of advanced digital signal processing (DSP) functionalities, similar to those found in the julius library, into TensorFlow's tf.signal module. The goal is to enhance TensorFlow's native capabilities for audio data augmentation, enabling researchers and developers to perform complex audio preprocessing and augmentation within the TensorFlow ecosystem without relying on external libraries.
- [DOCS] Missing complex input for Round op: This issue reports a documentation bug in TensorFlow where the Round operation is described as supporting complex tensor inputs, but in practice, attempting to use a complex tensor with this operation results in an error, requiring users to round the real and imaginary parts separately. The user provides a reproducible example on macOS with TensorFlow 2.15.0 and Python 3.9, demonstrating that the operation fails with a device-not-found error despite the documentation indicating support for complex types.
- tf.raw_ops.Unbatch aborts with "Check failed: d < dims()": This issue reports a bug in TensorFlow version 2.17 where the operation tf.raw_ops.Unbatch aborts with a fatal error message "Check failed: d < dims()" during execution. The problem occurs on Linux Ubuntu 20.04.3 LTS with Python 3.11.8 and can be reproduced using a specific code snippet that triggers the failure, causing the program to crash with a core dump.
2.3 Open Issues
This section lists, groups, and then summarizes issues that were created within the last week in the repository.
Issues Opened This Week: 19
Summarized Issues:
- XLA Compilation Failures with Symbolic Tensors and Loops: Multiple issues report XLA compilation errors when TensorFlow models use loops or operations involving symbolic tensors, such as tf.range, tf.sequence_mask, or loops over symbolic batch dimensions. These errors include TypeErrors and ValueErrors that do not occur in eager execution mode, indicating discrepancies in how XLA handles symbolic tensor operations within jit-compiled functions.
- issues/105638, issues/105642, issues/105643, issues/105648
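As a hedged sketch of the territory these issues cover, assuming TensorFlow is installed: the function below is illustrative (not from any of the reports) and shows the graph-friendly pattern of using vectorized ops such as tf.sequence_mask over symbolic dimensions instead of Python-level loops, which is where the reported errors tend to arise.

```python
import tensorflow as tf

@tf.function(jit_compile=True)
def masked_sum(lengths, values):
    # tf.sequence_mask over a lengths tensor, with maxlen derived from the
    # (symbolic) second dimension of `values`, avoids a Python loop over
    # a symbolic batch dimension.
    mask = tf.sequence_mask(lengths, maxlen=tf.shape(values)[1],
                            dtype=values.dtype)
    return tf.reduce_sum(values * mask, axis=1)

print(masked_sum(tf.constant([1, 2]), tf.ones([2, 3])).numpy())  # [1. 2.]
```

Whether a given tf.range or tf.sequence_mask pattern compiles can depend on the TensorFlow version, which is exactly the discrepancy these issues report.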
- XLA Compilation Issues with Zero or Negative Tensor Dimensions: Several issues describe XLA compilation failures caused by zero-sized or negative spatial dimensions in tensors during Conv2D, MaxPooling2D, or related operations. These failures result in ValueErrors or negative dimension size errors that do not appear in eager mode, highlighting problems with XLA's shape inference and layout optimization for edge cases involving empty or undersized tensor dimensions.
- issues/105636, issues/105639, issues/105647, issues/105651
- XLA Compilation Failures Due to Variable Creation Inside tf.function: There are bugs where creating tf.Variable instances inside the call() method of TensorFlow models decorated with @tf.function(jit_compile=True) causes XLA compilation to fail with ValueErrors. This violates the requirement that variables must be created only once or outside the tf.function, causing compilation errors despite successful eager execution.
- issues/105644, issues/105654
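The documented workaround for this class of bug can be sketched as below, assuming TensorFlow is installed; the Scaler module is hypothetical and simply demonstrates creating the tf.Variable once in the constructor rather than inside the compiled call.

```python
import tensorflow as tf

class Scaler(tf.Module):
    def __init__(self):
        super().__init__()
        # Create the variable once, outside the compiled function; creating
        # it inside __call__ on every trace is what XLA compilation rejects.
        self.scale = tf.Variable(2.0)

    @tf.function(jit_compile=True)
    def __call__(self, x):
        return x * self.scale

m = Scaler()
print(m(tf.constant(3.0)).numpy())  # 6.0
```

Hoisting variable creation out of the traced function satisfies the create-once rule in both eager and XLA-compiled execution.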
- XLA Compilation and Resource Device Placement Conflicts: One issue reports a failure when using tf.lookup.StaticHashTable with XLA compilation because the resource is created on the CPU while XLA attempts to execute on the GPU. This cross-device resource access error does not occur in eager mode, indicating a problem with resource placement during XLA compilation.
- issues/105646
- XLA Compilation Failures with Unsupported Operations or Methods: Some issues describe XLA compilation errors caused by unsupported operations such as calling numpy() on symbolic tensors or using tf.numpy_function inside jit-compiled functions. These operations cause NotImplementedError or unknown TensorShape errors during compilation, although they work in eager mode.
- issues/105652, issues/105728
- XLA Compilation Output Handling Errors: An issue reports an AttributeError when an XLA-compiled TensorFlow function returns a tuple and the code attempts to access the shape attribute directly on the tuple output. This error occurs because tuples do not have a shape property, unlike individual tensor elements.
- issues/105729
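A minimal sketch of the reported pitfall, assuming TensorFlow is installed; the function name is illustrative. A compiled function that returns a tuple has no shape attribute itself, so each element must be inspected individually.

```python
import tensorflow as tf

@tf.function(jit_compile=True)
def split_stats(x):
    # Returning two tensors yields a Python tuple on the caller's side.
    return tf.reduce_mean(x, axis=0), tf.reduce_max(x, axis=0)

out = split_stats(tf.ones([4, 3]))
# out.shape would raise AttributeError: tuples have no shape attribute.
shapes = [t.shape for t in out]
print(shapes)
```

Unpacking the tuple (or indexing into it) before touching .shape sidesteps the AttributeError described in the issue.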
- Multi-GPU Training NaN Loss Bug: A bug is reported where training a TensorFlow model on multi-GPU setups with certain NVIDIA GeForce GTX 1080 Ti pairs results in the loss value becoming NaN, causing training to fail. This issue affects model training stability on specific hardware configurations.
- issues/105552
- TensorFlow ImportError on Windows with Python 3.11: An ImportError occurs on Windows 64-bit with Python 3.11 (the TensorFlow version is reported as 3.11, likely mirroring the Python version) due to a DLL initialization failure when importing the internal _pywrap_tensorflow module. This prevents TensorFlow from functioning properly on this platform and Python version combination.
- issues/105562
- Documentation Discrepancy for tf.strided_slice Parameter Types: There is a discrepancy between TensorFlow documentation and implementation for the tf.strided_slice function, where the docs state that the begin and sizes parameters must be int32 or int64 tensors, but the implementation also accepts int16 tensors. This inconsistency may cause confusion for users relying on the documentation.
- issues/105470
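The discrepancy can be checked with a short sketch, assuming a recent TensorFlow install whose StridedSlice kernel accepts int16 indices as the issue claims; the values below are illustrative.

```python
import tensorflow as tf

x = tf.range(10)
# The documentation lists only int32/int64 for begin/end/strides, but the
# implementation reportedly also accepts int16 index tensors.
out = tf.strided_slice(x,
                       begin=tf.constant([2], tf.int16),
                       end=tf.constant([8], tf.int16),
                       strides=tf.constant([2], tf.int16))
print(out.numpy())  # [2 4 6]
```

If this runs without error, the implementation is more permissive than the documented contract, which is the inconsistency the issue asks to have reconciled.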
- Request for GPU AOT Compilation Guidance: A user requests documentation and examples for ahead-of-time (AOT) compilation and deployment targeting GPUs using generated PB, MHLO, or stablehlo files. The user encounters errors and lacks guidance for GPU backend support despite successful CPU compilation, indicating a need for improved documentation.
- issues/105768
2.4 Closed Issues
This section lists, groups, and then summarizes issues that were closed within the last week in the repository. This section also links the associated pull requests if applicable.
Issues Closed This Week: 0
Summarized Issues:
As of our latest update, there were no issues closed in the project this week.
2.5 Issue Discussion Insights
This section will analyze the tone and sentiment of discussions within this project's open and closed issues that occurred within the past week. It aims to identify potentially heated exchanges and to maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open or closed issues from the past week.
III. Pull Requests
3.1 Open Pull Requests
This section provides a summary of pull requests that were opened in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.
Pull Requests Opened This Week: 9
Key Open Pull Requests
1. Fix XLA JIT compilation with mixed-type dictionary keys (#105333): This pull request fixes XLA JIT compilation failures caused by returning dictionaries with mixed-type keys by implementing a multi-level fallback sorting strategy that ensures deterministic and type-consistent ordering of dictionary keys during the nest flattening process, thereby preventing TypeErrors and maintaining backward compatibility, with comprehensive tests validating the solution.
- URL: pull/105372
- Merged: No
- Associated Commits: 85d98, 742ac, 42d90, d01cf, be78b, 49899, 2d72f, a6f9c, 7f75d, 07ded, 3c7a9, 57d38, 7d433, 27e7c, 6b91e, 30789, ab523, 004b4
2. Fix xla keras initializers dynamic shapes: This pull request addresses a bug in TensorFlow where Keras initializers fail under XLA JIT compilation with dynamic shapes by safely extracting concrete tensor values using tensor_util.constant_value(), adding clear error messages to guide users toward using concrete shapes or initializing weights outside XLA-compiled functions, and includes comprehensive tests and a demonstration script to validate and illustrate the fix while maintaining backward compatibility.
- URL: pull/105371
- Merged: No
3. Fix #105366: This pull request fixes a segmentation fault in TensorFlow 1.x compatibility mode that occurs when initializing complex-valued variables using tf.get_variable() with dtype=tf.complex128 or tf.complex64 and a constant initializer, by adding support for complex types in GPU kernel registrations and update functors, along with comprehensive test cases.
- URL: pull/105369
- Merged: No
Other Open Pull Requests
- Segmentation fault fix in process_utils.cc: This pull request adds validation that clamps thread count values to a maximum of 1024, logs warnings for excessively large values, and handles negative inputs by defaulting to auto-detection. It also includes a new test file to verify these changes, preventing segmentation faults.
[pull/105640]
- TOSA QuantizedType legalization correction: This pull request ensures the signed information from QuantizedType is correctly utilized during the legalization process in the TOSA component. It serves as a follow-up to a previous related change to improve correctness.
[pull/105376]
- tf.strided_slice documentation and type annotation fixes: This pull request fixes inconsistencies between the implementation and documentation of the tf.strided_slice function by correcting errors in its documentation. It also updates type annotations for the begin and size parameters to include int16.
[pull/105471]
- oneDNN primitive caching performance improvement: This pull request replaces std::unordered_map with absl::flat_hash_map in the "cache_" implementation, resulting in approximately 4.7 times faster GetOp() and 1.13 times faster SetOp() operations for inner-product and matmul primitives. This change significantly boosts caching efficiency.
[pull/105522]
- Crash fix in MKL backend for tf.nn.conv1d_transpose with dynamic batch size: This pull request adds logic to detect and correctly infer the batch size from the gradient tensor, preventing an INVALID_ARGUMENT error caused by the original shape handling code. It resolves crashes in the MKL backend when using dynamic batch sizes.
[pull/105574]
- CUDA convolution resource handle error fix with batch splitting: This pull request introduces an automatic batch-splitting fallback in LaunchConvOpImpl() to handle CUDA invalid resource handle errors during convolutions on large tensors. It detects size violations, calculates safe batch sizes, and recursively processes convolutions in smaller chunks with thorough bounds checking to ensure correctness and prevent errors.
[pull/105370]
3.2 Closed Pull Requests
This section provides a summary of pull requests that were closed in the repository over the past week. The top three pull requests with the highest number of commits are highlighted as 'key' pull requests. Other pull requests are grouped based on similar characteristics for easier analysis. Up to 25 pull requests are displayed in this section, while any remaining pull requests beyond this limit are omitted for brevity.
Pull Requests Closed This Week: 5
Key Closed Pull Requests
1. Spam: Fix b202: This pull request addresses the security vulnerability B202 by modifying the safe_extract.py file to fix file extraction issues after path validation, along with various updates to SonarQube configurations and integration of additional security analysis tools like Cppcheck and Bandit.
- URL: pull/105747
- Merged: No
- Associated Commits: 3a5b4, 81acf, b03b7, 609d5, 1b5d7, 8d7e9, 4c210, d1ad3, 314eb, 6d240, 960de, 2e783, bf7c5, b410a
2. segmentation error reslved: This pull request addresses a segmentation error by adding validation code in process_utils.cc that caps thread count values at a safe maximum of 1024, logs warnings for excessively large values, and handles negative inputs by defaulting to auto-detection; it also includes a new test file to verify these changes.
- URL: pull/105631
- Merged: No
3. Fix cwe202: This pull request aims to address the security issue CWE-202 by adding validation to the zip extraction process, although it was not merged into the main codebase.
- URL: pull/105633
- Merged: No
- Associated Commits: 8bbf0
Other Closed Pull Requests
- Build system enhancements for FlatBuffers runtime libraries: This set of pull requests focuses on improving the FlatBuffers build system by adding functional C++ and Python runtime libraries, which were previously empty. The changes also enhance the structure and readability of the build configuration to support these new runtime components effectively.
- pull/105736
- Input validation improvements in tf.keras.layers.Dense: These pull requests improve the validation of the units argument in the tf.keras.layers.Dense layer by enforcing that only positive integers are accepted. They prevent silent truncation of float values, disallow zero and None inputs, and raise a clear ValueError for invalid inputs without affecting valid cases.
- pull/105737
3.3 Pull Request Discussion Insights
This section will analyze the tone and sentiment of discussions within this project's open and closed pull requests that occurred within the past week. It aims to identify potentially heated exchanges and to maintain a constructive project environment.
Based on our analysis, there are no instances of toxic discussions in the project's open or closed pull requests from the past week.
IV. Contributors
4.1 Contributors
Active Contributors:
We consider an active contributor in this project to be any contributor who has made at least 1 commit, opened at least 1 issue, created at least 1 pull request, or made more than 2 comments in the last month.
If there are more than 10 active contributors, the list is truncated to the top 10 based on contribution metrics for better clarity.
| Contributor | Commits | Pull Requests | Issues | Comments |
|---|---|---|---|---|
| CodersAcademy006 | 47 | 13 | 0 | 9 |
| ashvinashivin0-sketch | 0 | 0 | 22 | 0 |
| kokol16 | 0 | 0 | 22 | 0 |
| Blooming-Tree | 0 | 0 | 22 | 0 |
| qiqicliff | 0 | 0 | 20 | 0 |
| mukilathuruvan | 14 | 1 | 0 | 0 |
| mihaimaruseac | 0 | 0 | 0 | 15 |
| Aaraviitkgp | 5 | 2 | 0 | 6 |
| shank87414 | 0 | 0 | 8 | 0 |
| newcomer119 | 4 | 2 | 0 | 0 |