TensorFlow 2.1.0
Release 2.1.0
TensorFlow 2.1 will be the last TF release supporting Python 2. Python 2 support officially ends on January 1, 2020. As announced earlier, TensorFlow will also stop supporting Python 2 starting January 1, 2020, and no more releases are expected in 2019.
Major Features and Improvements
- The `tensorflow` pip package now includes GPU support by default (same as `tensorflow-gpu`) for both Linux and Windows. This runs on machines with and without NVIDIA GPUs. `tensorflow-gpu` is still available, and CPU-only packages can be downloaded at `tensorflow-cpu` for users who are concerned about package size.
- Windows users: Officially-released `tensorflow` pip packages are now built with Visual Studio 2019 version 16.4 in order to take advantage of the new `/d2ReducedOptimizeHugeFunctions` compiler flag. To use these new packages, you must install "Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019", available from Microsoft's website.
  - This does not change the minimum required version for building TensorFlow from source on Windows, but builds enabling `EIGEN_STRONG_INLINE` can take over 48 hours to compile without this flag. Refer to `configure.py` for more information about `EIGEN_STRONG_INLINE` and `/d2ReducedOptimizeHugeFunctions`.
  - If either of the required DLLs, `msvcp140.dll` (old) or `msvcp140_1.dll` (new), is missing on your machine, `import tensorflow` will print a warning message.
- The `tensorflow` pip package is built with CUDA 10.1 and cuDNN 7.6.
- `tf.keras`
  - Experimental support for mixed precision is available on GPUs and Cloud TPUs. See the usage guide.
  - Introduced the `TextVectorization` layer, which takes as input raw strings and takes care of text standardization, tokenization, n-gram generation, and vocabulary indexing. See this end-to-end text classification example.
  - Keras `.compile`, `.fit`, `.evaluate`, and `.predict` are allowed to be outside of the DistributionStrategy scope, as long as the model was constructed inside of a scope.
  - Experimental support for Keras `.compile`, `.fit`, `.evaluate`, and `.predict` is available for Cloud TPUs and Cloud TPU pods, for all types of Keras models (sequential, functional, and subclassing models).
  - Automatic outside compilation is now enabled for Cloud TPUs. This allows `tf.summary` to be used more conveniently with Cloud TPUs.
  - Dynamic batch sizes with DistributionStrategy and Keras are supported on Cloud TPUs.
  - Support for `.fit`, `.evaluate`, and `.predict` on TPU using numpy data, in addition to `tf.data.Dataset`.
  - Keras reference implementations for many popular models are available in the TensorFlow Model Garden.
- `tf.data`
  - Changes rebatching for `tf.data` datasets + DistributionStrategy for better performance. Note that the dataset also behaves slightly differently, in that the rebatched dataset cardinality will always be a multiple of the number of replicas.
  - `tf.data.Dataset` now supports automatic data distribution and sharding in distributed environments, including on TPU pods.
  - Distribution policies for `tf.data.Dataset` can now be tuned with:
    1. `tf.data.experimental.AutoShardPolicy` (OFF, AUTO, FILE, DATA)
    2. `tf.data.experimental.ExternalStatePolicy` (WARN, IGNORE, FAIL)
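As a sketch (assuming TensorFlow 2.1+ is installed), the sharding policy is applied through `tf.data.Options` before the dataset is distributed:

```python
import tensorflow as tf

# Build a small batched dataset.
dataset = tf.data.Dataset.range(8).batch(2)

# Tune the distribution policies listed above via dataset options.
options = tf.data.Options()
# Turn off automatic sharding across workers for this dataset.
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.OFF
)
dataset = dataset.with_options(options)
```

A dataset carrying these options keeps them through subsequent transformations, so they can be set once near the input pipeline's definition.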
- `tf.debugging`
  - Add `tf.debugging.enable_check_numerics()` and `tf.debugging.disable_check_numerics()` to help debugging the root causes of issues involving infinities and `NaN`s.
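A minimal sketch of how the numerics check surfaces bad values (assuming TensorFlow 2.1+ running eagerly):

```python
import tensorflow as tf

# With numerics checking enabled, ops that produce inf/NaN raise an
# error identifying the culprit op instead of silently propagating
# bad values downstream.
tf.debugging.enable_check_numerics()

caught = False
try:
    tf.math.log(tf.constant(0.0))  # -inf should trigger the check
except tf.errors.InvalidArgumentError:
    caught = True

# Restore the default behavior when done debugging.
tf.debugging.disable_check_numerics()
```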
- `tf.distribute`
  - Custom training loop support on TPUs and TPU pods is available through `strategy.experimental_distribute_dataset`, `strategy.experimental_distribute_datasets_from_function`, `strategy.experimental_run_v2`, and `strategy.reduce`.
  - Support for a global distribution strategy through `tf.distribute.experimental_set_strategy()`, in addition to `strategy.scope()`.
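As a sketch (assuming TensorFlow 2.1+), the global strategy avoids wrapping model-building code in an explicit scope:

```python
import tensorflow as tf

# Install a strategy process-wide instead of using strategy.scope().
# MirroredStrategy falls back to CPU when no GPUs are present.
strategy = tf.distribute.MirroredStrategy()
tf.distribute.experimental_set_strategy(strategy)

# Code running afterwards picks up the strategy implicitly.
current = tf.distribute.get_strategy()
```

Passing `None` to `tf.distribute.experimental_set_strategy()` clears the global strategy again.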
- TensorRT
  - TensorRT 6.0 is now supported and enabled by default. This adds support for more TensorFlow ops including Conv3D, Conv3DBackpropInputV2, AvgPool3D, MaxPool3D, ResizeBilinear, and ResizeNearestNeighbor. In addition, the TensorFlow-TensorRT python conversion API is exported as `tf.experimental.tensorrt.Converter`.
- Environment variable `TF_DETERMINISTIC_OPS` has been added. When set to "true" or "1", this environment variable makes `tf.nn.bias_add` operate deterministically (i.e. reproducibly), but currently only when XLA JIT compilation is not enabled. Setting `TF_DETERMINISTIC_OPS` to "true" or "1" also makes cuDNN convolution and max-pooling operate deterministically. This makes Keras Conv*D and MaxPool*D layers operate deterministically in both the forward and backward directions when running on a CUDA-enabled GPU.
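Since the variable is read from the process environment, one low-risk pattern (a sketch, not the only option) is to set it before TensorFlow is imported:

```python
import os

# Request deterministic kernels; "1" and "true" are both accepted.
# Setting this before importing TensorFlow ensures every op
# implementation that consults the variable sees it.
os.environ["TF_DETERMINISTIC_OPS"] = "1"

# import tensorflow as tf  # import after the variable is set
```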
Breaking Changes
- Deletes `Operation.traceback_with_start_lines`, for which we know of no usages.
- Removed `id` from `tf.Tensor.__repr__()`, as `id` is not useful other than for internal debugging.
- Some `tf.assert_*` methods now raise assertions at operation creation time if the input tensors' values are known at that time, not during the `session.run()`. This only changes behavior when the graph execution would have resulted in an error. When this happens, a no-op is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in the `feed_dict` argument to `session.run()`, an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).
- The following APIs are no longer experimental: `tf.config.list_logical_devices`, `tf.config.list_physical_devices`, `tf.config.get_visible_devices`, `tf.config.set_visible_devices`, `tf.config.get_logical_device_configuration`, `tf.config.set_logical_device_configuration`.
- `tf.config.experimental.VirtualDeviceConfiguration` has been renamed to `tf.config.LogicalDeviceConfiguration`.
- `tf.config.experimental_list_devices` has been removed; please use `tf.config.list_logical_devices`.
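For example (assuming TensorFlow 2.1+), code using the removed endpoint can migrate to the stabilized device-listing APIs:

```python
import tensorflow as tf

# tf.config.experimental_list_devices is gone; the stable replacement
# returns LogicalDevice objects rather than raw device-name strings.
device_names = [d.name for d in tf.config.list_logical_devices()]

# The stabilized physical-device query works the same way.
cpus = tf.config.list_physical_devices("CPU")
```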
Bug Fixes and Other Changes
- `tf.data`
  - Fixes concurrency issue with `tf.data.experimental.parallel_interleave` with `sloppy=True`.
  - Add `tf.data.experimental.dense_to_ragged_batch()`.
  - Extend `tf.data` parsing ops to support `RaggedTensor`s.
- `tf.distribute`
  - Fix issue where GRU would crash or give incorrect output when a `tf.distribute.Strategy` was used.
- `tf.estimator`
  - Added option in `tf.estimator.CheckpointSaverHook` to not save the `GraphDef`.
  - Moving the checkpoint reader from swig to pybind11.
- `tf.keras`
  - Export `depthwise_conv2d` in `tf.keras.backend`.
  - In Keras Layers and Models, Variables in `trainable_weights`, `non_trainable_weights`, and `weights` are explicitly deduplicated.
  - Keras `model.load_weights` now accepts `skip_mismatch` as an argument. This was available in external Keras, and has now been copied over to `tf.keras`.
  - Fix the input shape caching behavior of Keras convolutional layers.
  - `Model.fit_generator`, `Model.evaluate_generator`, `Model.predict_generator`, `Model.train_on_batch`, `Model.test_on_batch`, and `Model.predict_on_batch` methods now respect the `run_eagerly` property, and will correctly run using `tf.function` by default. Note that `Model.fit_generator`, `Model.evaluate_generator`, and `Model.predict_generator` are deprecated endpoints. They are subsumed by `Model.fit`, `Model.evaluate`, and `Model.predict`, which now support generators and Sequences.
- `tf.lite`
  - Legalization for `NMS` ops in TFLite.
  - Add `narrow_range` and `axis` to `quantize_v2` and `dequantize` ops.
  - Added support for `FusedBatchNormV3` in converter.
  - Add an `errno`-like field to `NNAPI` delegate for detecting `NNAPI` errors for fallback behaviour.
  - Refactors `NNAPI` Delegate to support detailed reason why an operation is not accelerated.
  - Converts hardswish subgraphs into atomic ops.
- Other
  - Critical stability updates for TPUs, especially in cases where the XLA compiler produces compilation errors.
  - TPUs can now be re-initialized multiple times, using `tf.tpu.experimental.initialize_tpu_system`.
  - Add `RaggedTensor.merge_dims()`.
  - Added new `uniform_row_length` row-partitioning tensor to `RaggedTensor`.
  - Add `shape` arg to `RaggedTensor.to_tensor`; improve speed of `RaggedTensor.to_tensor`.
  - `tf.io.parse_sequence_example` and `tf.io.parse_single_sequence_example` now support ragged features.
  - Fix `while_v2` with variables in custom gradient.
  - Support taking gradients of V2 `tf.cond` and `tf.while_loop` using `LookupTable`.
  - Fix bug where `vectorized_map` failed on inputs with unknown static shape.
  - Add preliminary support for sparse CSR matrices.
  - Tensor equality with `None` now behaves as expected.
  - Make calls to `tf.function(f)()`, `tf.function(f).get_concrete_function`, and `tf.function(f).get_initialization_function` thread-safe.
  - Extend `tf.identity` to work with CompositeTensors (such as SparseTensor).
  - Added more `dtypes` and zero-sized inputs to `Einsum` Op and improved its performance.
  - Enable multi-worker `NCCL` `all-reduce` inside functions executing eagerly.
  - Added complex128 support to `RFFT`, `RFFT2D`, `RFFT3D`, `IRFFT`, `IRFFT2D`, and `IRFFT3D`.
  - Add `pfor` converter for `SelfAdjointEigV2`.
  - Add `tf.math.ndtri` and `tf.math.erfinv`.
  - Add `tf.config.experimental.enable_mlir_bridge` to allow using the MLIR compiler bridge in eager mode.
  - Added support for MatrixSolve on Cloud TPU / XLA.
  - Added `tf.autodiff.ForwardAccumulator` for forward-mode autodiff.
  - Add `LinearOperatorPermutation`.
  - A few performance optimizations on `tf.reduce_logsumexp`.
  - Added multilabel handling to `AUC` metric.
  - Optimization on `zeros_like`.
  - Dimension constructor now requires `None` or types with an `__index__` method.
  - Add `tf.random.uniform` microbenchmark.
  - Use `_protogen` suffix for proto library targets instead of `_cc_protogen` suffix.
  - Moving the checkpoint reader from `swig` to `pybind11`.
  - `tf.device` & `MirroredStrategy` now support passing in a `tf.config.LogicalDevice`.
  - If you're building TensorFlow from source, consider using bazelisk to automatically download and use the correct Bazel version. Bazelisk reads the `.bazelversion` file at the root of the project directory.
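A quick sketch of the new `RaggedTensor` helpers listed above (assuming TensorFlow 2.1+):

```python
import tensorflow as tf

# merge_dims collapses a range of dimensions into one, flattening the
# outer two ragged dimensions here.
rt = tf.ragged.constant([[[1, 2], [3]], [[4]]])
merged = rt.merge_dims(0, 1)  # [[1, 2], [3], [4]]

# to_tensor now accepts a target shape, padding rows with default_value.
padded = tf.ragged.constant([[1, 2], [3]]).to_tensor(
    default_value=0, shape=[2, 3]
)  # [[1, 2, 0], [3, 0, 0]]
```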
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
8bitmp3, Aaron Ma, AbdüLhamit Yilmaz, Abhai Kollara, aflc, Ag Ramesh, Albert Z. Guo, Alex Torres, amoitra, Andrii Prymostka, angeliand, Anshuman Tripathy, Anthony Barbier, Anton Kachatkou, Anubh-V, Anuja Jakhade, Artem Ryabov, autoih, Bairen Yi, Bas Aarts, Basit Ayantunde, Ben Barsdell, Bhavani Subramanian, Brett Koonce, candy.dc, Captain-Pool, caster, cathy, Chong Yan, Choong Yin Thong, Clayne Robison, Colle, Dan Ganea, David Norman, David Refaeli, dengziming, Diego Caballero, Divyanshu, djshen, Douman, Duncan Riach, EFanZh, Elena Zhelezina, Eric Schweitz, Evgenii Zheltonozhskii, Fei Hu, fo40225, Fred Reiss, Frederic Bastien, Fredrik Knutsson, fsx950223, fwcore, George Grzegorz Pawelczak, George Sterpu, Gian Marco Iodice, Giorgio Arena, giuros01, Gomathi Ramamurthy, Guozhong Zhuang, Haifeng Jin, Haoyu Wu, HarikrishnanBalagopal, HJYOO, Huang Chen-Yi, Ilham Firdausi Putra, Imran Salam, Jared Nielsen, Jason Zaman, Jasper Vicenti, Jeff Daily, Jeff Poznanovic, Jens Elofsson, Jerry Shih, jerryyin, Jesper Dramsch, jim.meyer, Jongwon Lee, Jun Wan, Junyuan Xie, Kaixi Hou, kamalkraj, Kan Chen, Karthik Muthuraman, Keiji Ariyama, Kevin Rose, Kevin Wang, Koan-Sin Tan, kstuedem, Kwabena W. 
Agyeman, Lakshay Tokas, latyas, Leslie-Fang-Intel, Li, Guizi, Luciano Resende, Lukas Folle, Lukas Geiger, Mahmoud Abuzaina, Manuel Freiberger, Mark Ryan, Martin Mlostek, Masaki Kozuki, Matthew Bentham, Matthew Denton, mbhuiyan, mdfaijul, Muhwan Kim, Nagy Mostafa, nammbash, Nathan Luehr, Nathan Wells, Niranjan Hasabnis, Oleksii Volkovskyi, Olivier Moindrot, olramde, Ouyang Jin, OverLordGoldDragon, Pallavi G, Paul Andrey, Paul Wais, pkanwar23, Pooya Davoodi, Prabindh Sundareson, Rajeshwar Reddy T, Ralovich, Kristof, Refraction-Ray, Richard Barnes, richardbrks, Robert Herbig, Romeo Kienzler, Ryan Mccormick, saishruthi, Saket Khandelwal, Sami Kama, Sana Damani, Satoshi Tanaka, Sergey Mironov, Sergii Khomenko, Shahid, Shawn Presser, ShengYang1, Siddhartha Bagaria, Simon Plovyt, skeydan, srinivasan.narayanamoorthy, Stephen Mugisha, sunway513, Takeshi Watanabe, Taylor Jakobson, TengLu, TheMindVirus, ThisIsIsaac, Tim Gates, Timothy Liu, Tomer Gafner, Trent Lo, Trevor Hickey, Trevor Morris, vcarpani, Wei Wang, Wen-Heng (Jack) Chung, wenshuai, Wenshuai-Xiaomi, wenxizhu, william, William D. Irons, Xinan Jiang, Yannic, Yasir Modak, Yasuhiro Matsumoto, Yong Tang, Yongfeng Gu, Youwei Song, Zaccharie Ramzi, Zhang, Zhenyu Guo, 王振华 (Zhenhua Wang), 韩董, 이중건 Isaac Lee