Fig. 3 | Journal of Big Data

From: Multi-sample \(\zeta \)-mixup: richer, more realistic synthetic samples from a p-series interpolant

Visualizing the results obtained using mixup (b) and \(\zeta \)-mixup (c) on images (a) from the MNIST dataset. In d and e, we visualize the probabilistic “soft” labels assigned to the images generated by mixup and \(\zeta \)-mixup, respectively. Notice how all images in d look close to the digit “8”, yet their assigned soft labels do not contain the class “8”; that is, mixup produces images with a wrong label, i.e., a label different from the original labels of the two images they are interpolated from. \(\zeta \)-mixup alleviates this issue, and the soft labels in e correspond exactly to the classes the synthesized images belong to. In f and g, we evaluate the realism of mixup’s and \(\zeta \)-mixup’s generated samples and the correctness of the corresponding labels by measuring, respectively, the entropy of the Oracle’s predictions (\({\mathcal {H}}\)) and the cross-entropy of the Oracle’s predictions with the soft labels (CE). For both f and g, lower values are better.
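The contrast the caption draws can be made concrete in code: mixup forms a convex combination of two samples and their one-hot labels, while \(\zeta \)-mixup averages m samples with normalized p-series weights \(w_i \propto i^{-\gamma}\) applied to a random permutation, so the soft label is dominated by the heaviest-weighted sample. The sketch below is illustrative, not the authors' reference implementation; the function names and the default \(\gamma = 2.8\) are assumptions based on the paper's description.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    # Standard mixup: a Beta(alpha, alpha)-weighted convex combination
    # of two samples and their one-hot labels.
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def zeta_mixup(X, Y, gamma=2.8, rng=None):
    # Multi-sample interpolation: normalized p-series weights
    # w_i = i^(-gamma) / sum_j j^(-gamma), largest weight assigned to a
    # randomly permuted "dominant" sample (gamma=2.8 is illustrative).
    if rng is None:
        rng = np.random.default_rng()
    m = X.shape[0]
    idx = rng.permutation(m)
    w = np.arange(1, m + 1, dtype=float) ** -gamma
    w /= w.sum()                      # weights sum to 1
    x_syn = np.tensordot(w, X[idx], axes=1)
    y_syn = w @ Y[idx]                # soft label, dominated by X[idx[0]]
    return x_syn, y_syn
```

Because the leading weight exceeds the combined mass of the rest for a sufficiently large \(\gamma\), the synthesized sample stays visually close to one real sample and its soft label's argmax matches that sample's class, which is the behavior panels e and g illustrate.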