Fig. 4 From: Accelerating neural network training with distributed asynchronous and selective optimization (DASO)

Local Update (Broadcast): Schematic of the local update step performed after the global synchronization step shown in Fig. 3. The group member responsible for the global communication, in this case GPU:A, broadcasts its network parameters to all other node-local GPUs, replacing the old parameters on those GPUs.
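The local update step described above can be sketched as a simple simulation. This is a hypothetical illustration of the data flow only, not the paper's implementation (which uses GPU-to-GPU communication primitives); the function name `local_broadcast` and the per-node parameter dictionary are assumptions made for this example.

```python
import copy

def local_broadcast(node_params, sender):
    """Replace every node-local GPU's parameters with a copy of the
    designated sender's parameters (the local broadcast of Fig. 4)."""
    src = node_params[sender]
    return {gpu: copy.deepcopy(src) for gpu in node_params}

# Example: GPU:A holds the parameters produced by the global
# synchronization step (Fig. 3); GPU:B and GPU:C still hold stale values.
params = {
    "GPU:A": {"w": [0.5, -0.1]},
    "GPU:B": {"w": [0.9, 0.3]},
    "GPU:C": {"w": [0.1, 0.1]},
}
params = local_broadcast(params, "GPU:A")
# After the broadcast, all node-local GPUs share GPU:A's parameters.
```

In a real multi-GPU setting this step maps onto a broadcast within a node-local process group, with the globally communicating rank as the source.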