This lab provides hands-on practice with distributed training on Google Cloud’s AI Platform, using the MirroredStrategy found within tf.keras.
This strategy implements synchronous AllReduce training across multiple GPUs attached to a single virtual machine.
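As a quick illustration of how the strategy is created, the minimal sketch below constructs a MirroredStrategy and reports how many replicas it will synchronize; with no GPUs attached, TensorFlow falls back to a single CPU replica, so the snippet also runs on a CPU-only machine.

```python
import tensorflow as tf

# MirroredStrategy mirrors the model's variables onto every GPU visible
# to this VM and keeps the replicas in sync with an AllReduce of the
# gradients after each step.
strategy = tf.distribute.MirroredStrategy()

# One replica per device; falls back to a single CPU replica if no GPUs
# are attached.
print("Number of replicas:", strategy.num_replicas_in_sync)
```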
You’ll start by setting up the environment, then create a deep neural network model using the Fashion MNIST dataset, and finally train that model using a MultiWorkerMirroredStrategy running on multiple GPUs.
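The overall pattern the lab walks through can be sketched as follows. This is a single-machine approximation using MirroredStrategy (MultiWorkerMirroredStrategy additionally needs a TF_CONFIG cluster description on each worker), and it substitutes randomly generated stand-in data shaped like Fashion MNIST (28x28 grayscale images, 10 classes) so the sketch is self-contained and needs no download; the layer sizes are illustrative, not the lab's exact model.

```python
import numpy as np
import tensorflow as tf

# Stand-in data shaped like Fashion MNIST, used here so the sketch runs
# without fetching the real dataset.
x_train = np.random.rand(256, 28, 28).astype("float32")
y_train = np.random.randint(0, 10, size=(256,))

strategy = tf.distribute.MirroredStrategy()

# Variables must be created inside strategy.scope() so each replica gets
# a mirrored copy that the AllReduce step keeps in sync.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(28, 28)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )

# Scale the global batch size by the replica count so each device sees a
# constant per-replica batch.
batch_size = 64 * strategy.num_replicas_in_sync
model.fit(x_train, y_train, batch_size=batch_size, epochs=1, verbose=0)
```

The same create-strategy / build-under-scope / fit structure carries over to MultiWorkerMirroredStrategy; only the strategy constructor and the cluster configuration change.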