Fig. 6 | Journal of Big Data

From: Instance segmentation on distributed deep learning big data cluster

Training a deep learning model using the asynchronous parallel method with stochastic gradient descent. Multiple worker nodes independently train on different subsets of the data and update the shared model parameters asynchronously, without coordinating with one another. Because updates may be computed from stale parameters, training can converge more slowly and with greater variability than under synchronous schemes. However, the absence of synchronization barriers allows the method to use computational resources more efficiently than synchronous parallel methods.
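To make the mechanism concrete, the sketch below simulates asynchronous SGD with a shared parameter store and several worker threads, each training on its own data shard and pushing gradient updates as soon as they are ready. The names (`ParameterServer`, `worker`) and the toy linear-regression task are illustrative assumptions, not the implementation used in the paper.

```python
# Minimal sketch of asynchronous parallel SGD (illustrative, not the paper's code).
import threading
import numpy as np

class ParameterServer:
    """Holds the shared parameters; workers push gradients at their own pace."""
    def __init__(self, dim, lr=0.05):
        self.w = np.zeros(dim)
        self.lr = lr
        self.lock = threading.Lock()

    def pull(self):
        # Workers read a possibly stale copy of the parameters.
        return self.w.copy()

    def push(self, grad):
        # Apply the update as soon as it arrives; no barrier across workers.
        with self.lock:
            self.w -= self.lr * grad

def worker(server, X_shard, y_shard, steps, batch_size=32, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        idx = rng.integers(0, len(X_shard), size=batch_size)
        Xb, yb = X_shard[idx], y_shard[idx]
        w = server.pull()                                   # may be stale when pushed back
        grad = 2.0 * Xb.T @ (Xb @ w - yb) / batch_size      # gradient of mean squared error
        server.push(grad)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, n_workers = 10, 4
    w_true = rng.normal(size=dim)
    X = rng.normal(size=(4000, dim))
    y = X @ w_true + 0.1 * rng.normal(size=4000)

    server = ParameterServer(dim)
    shards = np.array_split(np.arange(len(X)), n_workers)   # each worker gets its own data subset
    threads = [
        threading.Thread(target=worker, args=(server, X[s], y[s], 500), kwargs={"seed": i})
        for i, s in enumerate(shards)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("parameter error:", np.linalg.norm(server.w - w_true))
```

Because each worker pulls and pushes independently, some updates are computed from parameters that other workers have already modified; this staleness is the source of the slower, noisier convergence described in the caption, while the lack of a synchronization barrier keeps all workers busy.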
