Sources:
- "Keras Multi-GPU and Distributed Training Mechanism with Examples", DataFlair
- Sergeev, A. and Del Balso, M., "Horovod: fast and easy distributed deep learning in TensorFlow", arXiv:1802.05799v3 [cs.LG], 21 Feb 2018
- sayakpaul/tf.keras-Distributed-Training (GitHub): shows how to use MirroredStrategy to distribute training workloads with the regular compile/fit paradigm in tf.keras
- "Multi-GPU and distributed training using Horovod in Amazon SageMaker Pipe mode", AWS Machine Learning Blog
- "TensorFlow Framework & GPU Acceleration", NVIDIA Data Center
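The third source above describes synchronous data parallelism with `tf.distribute.MirroredStrategy`: the model is built and compiled inside the strategy's scope, and the usual `fit` call then replicates the variables across the available GPUs (falling back to a single replica on CPU-only machines). A minimal sketch, with an arbitrary toy model and random data chosen purely for illustration:

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy mirrors model variables across all visible GPUs
# and aggregates gradients with an all-reduce each step.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

# Model creation and compilation must happen inside the strategy scope
# so the variables are created as mirrored variables.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(32,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# Toy data; the global batch size is split evenly across replicas.
x = np.random.random((256, 32)).astype("float32")
y = np.random.randint(0, 10, size=(256,))

# The regular fit/compile paradigm needs no further changes.
history = model.fit(x, y, epochs=1, batch_size=64, verbose=0)
```

Note that `batch_size` here is the global batch size: with 4 GPUs, each replica processes 16 examples per step, so the learning rate is often scaled up accordingly when moving to more replicas.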