Fig. 8 (From: Instance segmentation on distributed deep learning big data cluster)

Each worker node typically operates on one or more (data, model) RDDs, where each (data, model) RDD contains a subset of the input data and a replica of the model parameters. The number of (data, model) RDDs assigned to each worker node depends on the available resources in the Spark cluster and the configuration settings in BigDL.
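The (data, model) pairing described above is the core of data-parallel training: every replica sees a different data shard, and the replicas are kept consistent by combining their gradients. The following is a minimal, self-contained sketch of that idea in plain Python, not BigDL's actual API; the shard layout, the toy linear model, and all function names (`make_shards`, `local_gradient`, `train`) are illustrative assumptions.

```python
# Illustrative sketch of data-parallel training (NOT BigDL's API):
# each simulated worker holds one (data shard, model replica) pair,
# computes a local gradient, and the replicas stay in sync by
# applying the average of all local gradients.

def make_shards(data, num_workers):
    """Split the input data into one shard per worker."""
    return [data[i::num_workers] for i in range(num_workers)]

def local_gradient(shard, weight):
    # Toy gradient of mean squared error for the model y = weight * x,
    # computed only on this worker's shard.
    return sum(2 * x * (weight * x - y) for x, y in shard) / len(shard)

def train(data, num_workers=4, steps=50, lr=0.001):
    weight = 0.0                      # model parameter, replicated on every worker
    shards = make_shards(data, num_workers)
    for _ in range(steps):
        # Each worker evaluates a gradient against its own replica and shard...
        grads = [local_gradient(shard, weight) for shard in shards]
        # ...then every replica applies the same averaged gradient.
        weight -= lr * sum(grads) / len(grads)
    return weight

data = [(x, 3.0 * x) for x in range(1, 21)]   # ground truth: y = 3x
print(round(train(data), 2))                  # → 3.0
```

In BigDL on Spark the same pattern is realized with RDD partitions as the shards and driver-coordinated parameter synchronization instead of the explicit averaging loop shown here.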