October 7, 2019

What’s Coming in TensorFlow 2.0

TensorFlow is an open-source computational framework for building Machine Learning models. It was developed by the Google Brain team for internal Google use and was released under the Apache License 2.0 on November 9, 2015.

What is new in TensorFlow 2.0

There are multiple changes in TensorFlow 2.0, with a focus on making it more effective for production use. TensorFlow 2.0 removes redundant APIs, makes the remaining ones more consistent (Unified RNNs, Unified Optimizers), and integrates better with the Python runtime through eager execution.

  • Eager Execution
    To build a neural network in TF 1.0 we need to define a structure called a Graph. A graph is nothing but a series of mathematical operations arranged into a graph of nodes. Let’s try to visualize the computation graph for the equation z = x^2 + y^2 with x = 3 and y = 4. First we initialize two constants x and y with the values 3 and 4, then we compute the square of each, and finally we add the two squares and assign the result to the variable z.
Computational graph for the equation z = x^2 + y^2

A Graph contains a set of tf.Operation objects, which represent units of computation, and tf.Tensor objects, which represent the units of data that flow between operations. The graph only defines the computation; it doesn’t compute anything by itself. A session allows you to execute graphs, or parts of graphs. It allocates resources (on one or more machines) for that purpose and holds the actual values of intermediate results and variables.
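
To make this concrete, here is a rough sketch of what that looks like in TF 1.x (assuming a TF 1.x installation; under TF 2.0 the same calls live in tf.compat.v1):

import tensorflow as tf  # TF 1.x

# Build the graph for z = x^2 + y^2 -- nothing is computed yet
x = tf.constant(3)
y = tf.constant(4)
z = tf.add(tf.square(x), tf.square(y))

# A session allocates resources and actually evaluates the graph
with tf.Session() as sess:
    print(sess.run(z))  # 25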

In TF 2.0 the concept of creating a session and running the computational graph is removed, and an imperative programming environment – eager execution – is introduced, which evaluates operations immediately without building graphs. Operations return concrete values instead of constructing a computational graph to run later. This makes it easy to get started with TensorFlow and to debug models, and it reduces boilerplate as well.
In TensorFlow 2.0, eager execution is enabled by default; you can verify this with tf.executing_eagerly(), which returns True.

Code-based comparison between TF 1.x and TF 2.0
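
As a minimal sketch of the TF 2.0 side of that comparison (reusing the z = x^2 + y^2 example from above), the same computation runs eagerly, with no graph construction and no session:

import tensorflow as tf  # TF 2.0

print(tf.executing_eagerly())  # True

x = tf.constant(3)
y = tf.constant(4)
z = tf.add(tf.square(x), tf.square(y))
print(z)  # tf.Tensor(25, shape=(), dtype=int32)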

As you can see from the above code snippets, in TF 2.0 there is no need to create a session for computing the addition operation.

  • @tf.function, AutoGraph
    TensorFlow 2.0 brings together the ease of eager execution and the power of TF 1.0. At the center of this merger is tf.function, which allows you to transform a subset of Python syntax into portable, high-performance TensorFlow graphs.

    A cool new feature of tf.function is AutoGraph, which lets you write code in a more Pythonic way. For a list of Python features you can use with AutoGraph, refer here.

    We can use Python control flow statements inside a tf.function, and AutoGraph will convert them into the appropriate TensorFlow ops. For example, if statements will be converted into tf.cond() if they depend on a Tensor.
import tensorflow as tf


@tf.function
def square_if_positive(x):
  # AutoGraph rewrites this Tensor-dependent `if` into a tf.cond() op
  if x > 0:
    x = x * x
  else:
    x = 0
  return x


print('square_if_positive(2) = {}'.format(square_if_positive(tf.constant(2))))
print('square_if_positive(-2) = {}'.format(square_if_positive(tf.constant(-2))))

Output

square_if_positive(2) = 4
square_if_positive(-2) = 0

AutoGraph supports common Python statements like while, for, if, break, continue and return, with support for nesting. That means you can use Tensor expressions in the condition of while and if statements, or iterate over a Tensor in a for loop.
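
For example, here is a short sketch (the function name and input values are just illustrative) in which a for loop over a Tensor is converted into a tf.while_loop:

@tf.function
def sum_even(items):
  s = 0
  for c in items:  # iterating over a Tensor becomes a tf.while_loop
    if c % 2 > 0:  # Tensor-dependent if/continue are converted as well
      continue
    s += c
  return s


print(sum_even(tf.constant([10, 12, 15, 20])))  # tf.Tensor(42, shape=(), dtype=int32)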

For more details about tf.function, refer here, and for AutoGraph, refer here.

  • No more globals
    TensorFlow 1.x relied heavily on implicitly global namespaces. When you called tf.Variable(), the variable was put into the default graph and stayed there even if you lost track of the Python variable pointing to it. You could recover it with tf.get_variable(), but only if you knew the name it had been created with. This is very difficult if you are not in control of the variable’s creation, so in TF 2.0 this mechanism was eliminated (Variables 2.0 RFC). There are no more global variable collections: if you lose track of a tf.Variable, it simply gets garbage collected, as the sketch below illustrates.
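
A minimal sketch of this behavior (the variable here is just illustrative):

v = tf.Variable(1.0)   # an ordinary Python object, not registered in any global graph
v.assign_add(2.0)
print(v.numpy())       # 3.0

del v                  # last Python reference gone -> the variable is garbage collected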
  • API cleanup
    Many APIs have been removed in TensorFlow 2.0, such as tf.flags, tf.app and tf.logging, in favor of the now open-source absl-py. Other cleanups include rehoming projects that lived in tf.contrib and tidying the main tf.* namespace by moving lesser-used functions into subpackages like tf.math. Some APIs have been replaced with their 2.0 equivalents – tf.summary, tf.keras.metrics, and tf.keras.optimizers.
  • Robust model deployment on any platform
    TensorFlow has always provided a direct path to deployment. TensorFlow lets you train and deploy your model easily, irrespective of the language or platform you use. TensorFlow 2.0 improves compatibility and parity across platforms and components by standardizing exchange formats and aligning APIs.

    Once you’ve trained and saved your model, you can execute it directly in your application or serve it using one of the deployment libraries: TensorFlow Serving, TensorFlow Lite, or TensorFlow.js, as sketched below.
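
As a minimal sketch (the model and export path below are placeholders, not from the article), the standardized exchange format is SavedModel, which the deployment libraries and converters consume:

# A tiny placeholder Keras model, just to show the export step
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')

# The SavedModel can then be served with TensorFlow Serving or converted
# for TensorFlow Lite / TensorFlow.js
tf.saved_model.save(model, '/tmp/my_model')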

Conclusion:

In this article we briefly covered the major changes in TensorFlow 2.0 that make developing Machine Learning applications a lot easier. I hope you enjoyed reading this article.

Happy Machine Learning with TF 2.0, and stay tuned for the next series of tutorials, where we will dive deeper and build with TF 2.0.

Members can find the Get Started with TensorFlow 2 article here.


rkhemka

Guest Blogger


