Introduction to TensorFlow 2.0
Brad Miro - @bradmiro
Google
Spark + AI Summit Europe - October 2019
Deep Learning
Intro to TensorFlow
TensorFlow @ Google
2.0 and Examples
Getting Started
Deep Learning
Doodles courtesy of @dalequark
[Doodle: scatter plot of weight vs. height separating examples of cats from examples of dogs]
Use Deep Learning When...
You have lots of data (~10k+ examples)
The problem is “complex” - speech, vision, natural language
The data is unstructured
You need the absolute “best” model
Don’t Use Deep Learning When...
You don’t have a large dataset
You are performing sufficiently well with traditional ML methods
Your data is structured and you possess the proper domain knowledge
Your model should be explainable
TensorFlow
Open source deep learning library
Utilities to help you write neural networks
GPU / TPU support
Released by Google in 2015
2.0 released September 2019
41,000,000+ downloads, 69,000+ commits, 12,000+ pull requests, 2,200+ contributors
TensorFlow @ Google
AI-powered data center efficiency
Global localization in Google Maps
Portrait Mode on Google Pixel
2.0
Scalable: tested at Google-scale. Deploy everywhere.
Easy: simplified APIs. Focused on Keras and eager execution.
Powerful: flexibility and performance. Power to do cutting-edge research and scale to > 1 exaflops.
TensorFlow 2.0
Deploy anywhere: JavaScript, edge devices, servers
TF Probability
TF Agents
Tensor2Tensor
TF Ranking
TF Text
TF Federated
TF Privacy
...
import tensorflow as tf # Assuming TF 2.0 is installed
a = tf.constant([[1, 2],[3, 4]])
b = tf.matmul(a, a)
print(b)
# tf.Tensor(
# [[ 7 10]
#  [15 22]], shape=(2, 2), dtype=int32)
print(type(b.numpy()))
# <class 'numpy.ndarray'>
You can use TF 2.0 like NumPy
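The interop runs in both directions; a small sketch with illustrative values:

import numpy as np
import tensorflow as tf

a = np.array([[1., 2.], [3., 4.]])
b = tf.square(a)   # TensorFlow ops accept NumPy arrays directly
c = np.sum(b)      # NumPy ops accept tf.Tensors in eager mode
print(c)           # 30.0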
Specifics
What’s Gone
Session.run
tf.control_dependencies
tf.global_variables_initializer
tf.cond, tf.while_loop
tf.contrib
What’s New
Eager execution by default
tf.function
Keras as the main high-level API
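To make the contrast concrete, a minimal sketch of the same addition in 1.x style versus 2.0 (the 1.x half is shown commented out, since it no longer runs):

# TensorFlow 1.x: build a graph, then execute it in a session
#   a = tf.constant(2); b = tf.constant(3)
#   with tf.Session() as sess:
#       print(sess.run(a + b))  # 5

# TensorFlow 2.0: eager execution by default, no Session needed
import tensorflow as tf
a = tf.constant(2)
b = tf.constant(3)
print(a + b)  # tf.Tensor(5, shape=(), dtype=int32)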
Keras and tf.keras
tf.keras: fast prototyping, advanced research, and production
keras.io = reference implementation
import keras
tf.keras = TensorFlow’s implementation (a superset, built into TF; no need to install Keras separately)
from tensorflow import keras
For Beginners

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)

model.evaluate(x_test, y_test)
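The slides leave x_train and friends undefined; a minimal sketch that loads MNIST (an assumed dataset choice) so the snippet above runs end to end:

import tensorflow as tf

# Assumption: MNIST as the example dataset; the slides don't specify one
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]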
For Experts

from tensorflow.keras import layers

class MyModel(tf.keras.Model):
    def __init__(self, num_classes=10):
        super(MyModel, self).__init__(name='my_model')
        self.dense_1 = layers.Dense(32, activation='relu')
        self.dense_2 = layers.Dense(num_classes, activation='sigmoid')

    def call(self, inputs):
        # Define your forward pass here
        x = self.dense_1(inputs)
        return self.dense_2(x)
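Subclassed models pair naturally with custom training loops; a minimal sketch using tf.GradientTape (the optimizer, loss, and data are illustrative assumptions, not from the slides):

import tensorflow as tf

model = MyModel(num_classes=10)
optimizer = tf.keras.optimizers.Adam()          # assumption: Adam
loss_fn = tf.keras.losses.BinaryCrossentropy()  # matches the sigmoid outputs

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        predictions = model(x)
        loss = loss_fn(y, predictions)
    # Compute gradients and apply them to the model's weights
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss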
What’s the difference?
Symbolic vs Imperative APIs

Symbolic (For Beginners)
Your model is a graph of layers
Any graph you compile will run
TensorFlow helps you debug by catching errors at compile time

Imperative (For Experts)
Your model is Python bytecode
Complete flexibility and control
Harder to debug / harder to maintain
tf.function
Let’s make this faster

import timeit

lstm_cell = tf.keras.layers.LSTMCell(10)

@tf.function
def fn(input, state):
    return lstm_cell(input, state)

input = tf.zeros([10, 10]); state = [tf.zeros([10, 10])] * 2
lstm_cell(input, state); fn(input, state)  # warm up

# benchmark
timeit.timeit(lambda: lstm_cell(input, state), number=10)  # 0.03
timeit.timeit(lambda: fn(input, state), number=10)         # 0.004
@tf.function
def f(x):
    while tf.reduce_sum(x) > 1:
        x = tf.tanh(x)
    return x

# you never need to run this (unless curious)
print(tf.autograph.to_code(f.python_function))
AutoGraph makes this possible
def tf__f(x):
    def loop_test(x_1):
        with ag__.function_scope('loop_test'):
            return ag__.gt(tf.reduce_sum(x_1), 1)
    def loop_body(x_1):
        with ag__.function_scope('loop_body'):
            with ag__.utils.control_dependency_on_returns(tf.print(x_1)):
                tf_1, x = ag__.utils.alias_tensors(tf, x_1)
                x = tf_1.tanh(x)
            return x,
    x = ag__.while_stmt(loop_test, loop_body, (x,), (tf,))
    return x
Generated code
tf.distribute.Strategy
Going big: tf.distribute.Strategy

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(64, input_shape=[10]),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')])

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
Going big: Multi-GPU

strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.models.Sequential([
        tf.keras.layers.Dense(64, input_shape=[10]),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')])

    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
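Training then proceeds exactly as before; a minimal sketch with synthetic data (the data and batch size are illustrative assumptions), where MirroredStrategy splits each batch across the available GPUs:

import numpy as np

# Assumption: random data just to illustrate the call
x = np.random.random((1000, 10)).astype('float32')
y = tf.keras.utils.to_categorical(
    np.random.randint(10, size=(1000,)), num_classes=10)

# Each batch of 64 is divided across the replicas (GPUs) automatically
model.fit(x, y, batch_size=64, epochs=2)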
tensorflow_datasets

TensorFlow Datasets

# Load data
import tensorflow_datasets as tfds

dataset = tfds.load('mnist', as_supervised=True)
mnist_train, mnist_test = dataset['train'], dataset['test']

def scale(image, label):
    image = tf.cast(image, tf.float32)
    image /= 255
    return image, label

mnist_train = mnist_train.map(scale).batch(64)
mnist_test = mnist_test.map(scale).batch(64)
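These batched datasets can be passed straight to Keras; a minimal sketch, assuming the “For Beginners” model compiled earlier with sparse_categorical_crossentropy:

# tf.data datasets plug directly into fit/evaluate
model.fit(mnist_train, epochs=5)
model.evaluate(mnist_test)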
● audio
○ "nsynth"
● image
○ "cifar10"
○ "diabetic_retinopathy_detection"
○ "imagenet2012"
○ "mnist"
● structured
○ "titanic"
● text
○ "imdb_reviews"
○ "lm1b"
○ "squad"
● translate
○ "wmt_translate_ende"
○ "wmt_translate_enfr"
● video
○ "bair_robot_pushing_small"
○ "moving_mnist"
○ "starcraft_video"
More at tensorflow.org/datasets
Transfer Learning
import tensorflow as tf

base_model = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3),
    include_top=False,
    weights='imagenet')

base_model.trainable = False

model = tf.keras.models.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1)
])

# Compile and fit
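The “Compile and fit” step might look as follows; a minimal sketch (the optimizer, loss, and train_batches dataset are illustrative assumptions):

model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])

# train_batches: assumed tf.data.Dataset of (160, 160, 3) images
# with binary labels, e.g. cats vs. dogs
model.fit(train_batches, epochs=10)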
[Image: TensorFlow Hub and a serialized SavedModel]
Upgrading
Migration guides
tf.compat.v1 for backwards compatibility
tf_upgrade_v2 script
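A sketch of both upgrade paths (file and directory names are placeholders):

# The upgrade script rewrites 1.x code to the 2.0 API:
#   tf_upgrade_v2 --infile model_v1.py --outfile model_v2.py
#   tf_upgrade_v2 --intree project_v1/ --outtree project_v2/

# For code you can't convert yet, the 1.x API remains available:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()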
Getting Started
pip install tensorflow
TensorFlow 2.0
New Courses
Introduction to TensorFlow for AI, ML and DL: coursera.org/learn/introduction-tensorflow
Intro to TensorFlow for Deep Learning: udacity.com/tensorflow
github.com/orgs/tensorflow/projects/4
Go build.
pip install tensorflow
tensorflow.org
tf.thanks!
Brad Miro - @bradmiro
tensorflow.org
Spark + AI Summit Europe - October 2019