Fig. 9 | Journal of Big Data


From: Multi-sample \(\zeta \)-mixup: richer, more realistic synthetic samples from a p-series interpolant

Fig. 9

Comparing \(\zeta \)-mixup to manifold mixup on nine natural and medical image datasets spanning two model architectures, multiple medical imaging modalities, and image types (RGB and grayscale). All models trained with manifold mixup are optimized for \(10\times \) the number of epochs compared to their \(\zeta \)-mixup counterparts. We use the same evaluation metrics as reported in Tables 4, 6 and 7. The dotted lines connecting the pairs of metric values for \(\zeta \)-mixup and manifold mixup are color-coded: green indicates that the metric is higher for the model trained with manifold mixup, and red indicates that it is higher for the model trained with \(\zeta \)-mixup. The metrics reported here are the mean values over three runs for each model. For all metrics, higher values are better. Note that despite being an order of magnitude more computationally expensive, manifold mixup does not consistently outperform \(\zeta \)-mixup.
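
For context, \(\zeta \)-mixup (as the article title indicates) synthesizes each new sample as a convex combination of multiple training samples with weights drawn from a normalized p-series. The snippet below is a minimal sketch of that idea only, assuming weights proportional to \(i^{-\gamma }\) over a random ordering of the batch; the function name, the value of \(\gamma \), and the batch-level mixing loop are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def zeta_mixup_sketch(X, y, gamma=2.8, rng=None):
    """Illustrative sketch of p-series-weighted multi-sample mixing.

    Assumptions (not the authors' implementation): weights w_i are
    proportional to i**(-gamma) over ranks 1..N and normalized to sum
    to 1; each synthetic sample uses a fresh random ordering of the batch.

    X: (N, ...) array of samples; y: (N, C) one-hot labels.
    Returns a synthetic batch (X_out, y_out) of the same shapes.
    """
    rng = np.random.default_rng() if rng is None else rng
    N = X.shape[0]

    # Normalized p-series weights over ranks 1..N (gamma is a placeholder value).
    ranks = np.arange(1, N + 1, dtype=np.float64)
    w = ranks ** (-gamma)
    w /= w.sum()

    X_out = np.empty(X.shape, dtype=np.float64)
    y_out = np.empty(y.shape, dtype=np.float64)
    for k in range(N):
        perm = rng.permutation(N)  # random ordering: which sample gets which weight
        X_out[k] = np.tensordot(w, X[perm].astype(np.float64), axes=(0, 0))
        y_out[k] = w @ y[perm].astype(np.float64)
    return X_out, y_out

# Example usage with dummy data: a batch of 8 RGB images and 10 classes.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((8, 32, 32, 3))
    y = np.eye(10)[rng.integers(0, 10, size=8)]
    X_mix, y_mix = zeta_mixup_sketch(X, y, rng=rng)
    print(X_mix.shape, y_mix.shape, y_mix.sum(axis=1))  # soft labels sum to 1
```

Because the leading weight dominates the normalized p-series, each synthetic sample stays close to one real sample while still drawing information from the rest of the batch, which is the property the figure's comparison against manifold mixup is probing.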
