Multi-sample ζ-mixup: richer, more realistic synthetic samples from a p-series interpolant

Modern deep learning training procedures rely on model regularization techniques such as data augmentation methods, which generate training samples that increase the diversity of data and richness of label information. A popular recent method, mixup, uses convex combinations of pairs of original samples to generate new samples. However, as we show in our experiments, mixup can produce undesirable synthetic samples, where the data is sampled off the manifold and can contain incorrect labels. We propose ζ-mixup, a generalization of mixup with provably and demonstrably desirable properties that allows convex combinations of T ≥ 2 samples, leading to more realistic and diverse outputs that incorporate information from T original samples by using a p-series interpolant. We show that, compared to mixup, ζ-mixup better preserves the intrinsic dimensionality of the original datasets, which is a desirable property for training generalizable models. Furthermore, we show that our implementation of ζ-mixup is faster than mixup, and extensive evaluation on controlled synthetic and 26 diverse real-world natural and medical image classification datasets shows that ζ-mixup outperforms mixup, CutMix, and traditional data augmentation techniques. The code will be released at https://github.com/kakumarabhishek/zeta-mixup.

Given the large parameter space of deep learning models, training on small datasets tends to cause the models to overfit to the training samples. This is especially a problem when training with data from high-dimensional input spaces, such as images, because the sampling density is proportional to N^(1/D), where N is the number of samples and D is the dimensionality of the input space [27]. As D grows larger (typically 10^4 to 10^6 for most real-world image datasets), we need to increase the number of samples exponentially in order to retain the same sampling density. As a result, it is imperative that the training datasets for these models have a sufficiently large number of samples in order to prevent overfitting. Moreover, deep learning models generally exhibit good generalization performance when evaluated on samples that come from a distribution similar to the training samples' distribution. In addition to their regularization effects that prevent overfitting [30,29], data augmentation techniques also help training by synthesizing more samples in order to better learn the training distributions.
Traditional image data augmentation techniques include geometric- and intensity-based transformations, such as affine transformations, rotation, scaling, zooming, cropping, adding noise, etc., and are quite popular in the deep learning literature. For a comprehensive review of data augmentation techniques for deep learning methods on images, we refer the interested readers to the survey by Shorten et al. [54]. In this paper, we focus on a recent and popular family of data augmentation techniques based on a rather simple idea: generating a convex combination of a pair of input samples, variations of which are presented as mixup [73], Between-Class learning [59], and SamplePairing [33]. The most popular of these approaches, mixup [73], performs data augmentation by generating new training samples from convex combinations of pairs of original samples and linear interpolations of their corresponding labels, leading to new training samples, obtained by essentially overlaying 2 images with different transparencies, and new training labels, which are soft probabilistic labels. Other related augmentation methods can broadly be grouped into 3 categories: (a) methods that crop or mask region(s) of the original input image followed by mixup-like blending, e.g., CutMix [69] and GridMix [5], (b) methods that generate convex combinations in the learned feature space, e.g., Manifold Mixup [62] and MixFeat [67], and (c) methods that add a learnable component to mixup, e.g., AdaMixUp [25], AutoMix [74], and AutoMix [44]. However, mixup can lead to ghosting artifacts in the synthesized samples (as we show later in the paper, e.g., Fig. 3), in addition to generating synthetic samples with wrong class labels. Moreover, because mixup uses a convex combination of only a pair of points, it can lead to the synthetic samples being generated off the original data manifold (Fig. 1 (a)). This in turn leads to an inflation of the manifold, which can be quantified by an increase in the intrinsic dimensionality of the resulting data distribution, as shown in Fig. 4; this is undesirable since it has been shown that deep models trained on datasets with lower intrinsic dimensionalities generalize better to unseen samples [48]. Additionally, mixup-like approaches that crop or mask regions of the input images may degrade the training data quality by occluding informative and discriminative regions of images, which is highly undesirable for high-stakes applications such as medical image analysis tasks.
The primary hypothesis of mixup and many of its derivatives is that a model should behave linearly between any two training samples, even if the distance between the samples is large. This implies that we may train the model with synthetic samples that have very low confidence of realism, in effect over-regularizing the model. We instead argue that a model should only behave linearly in the vicinity of training samples and that we should thus only generate synthetic examples with high confidence of realism. To achieve this, we propose ζ-mixup, a generalization of mixup with provably desirable properties that addresses the shortcomings of mixup. ζ-mixup generates new training samples by using a convex combination of N samples in a training batch, requires no custom layers or special training procedures to employ, and is faster than mixup in terms of wall-clock time. We show how, as compared to mixup, the ζ-mixup formulation allows for generating more realistic and more diverse samples that better conform to the data manifold (Fig. 1 (b)) with richer labels that incorporate information from multiple classes, and that mixup is indeed a special case of ζ-mixup. We show qualitatively and quantitatively on synthetic and real-world datasets that ζ-mixup's output better preserves the intrinsic dimensionality of the data than that of mixup. Finally, we demonstrate the efficacy of ζ-mixup on 24 datasets comprising a wide variety of tasks, from natural image classification to diagnosis with several medical imaging modalities.

Method
Vicinal Risk Minimization: Revisiting the concept of risk minimization from Vapnik [61], given X and Y as the input data and the target label distributions respectively, and a family of functions F, the supervised learning setting consists of searching for an optimal function f ∈ F : X → Y that minimizes the expected value of a given loss function L over the data distribution P(x, y); (x, y) ∈ (X, Y). This expected value of the loss, also known as the expected risk, is given by R(f) = ∫ L(f(x), y) P(x, y) dx dy. In scenarios where the exact distribution P(x, y) is unknown, such as in practical supervised learning settings with a finite training dataset {(x_i, y_i)}_{i=1}^{m}, the common approach is to minimize the risk w.r.t. the empirical data distribution approximated by using delta functions at each sample, R_emp(f) = (1/m) Σ_{i=1}^{m} L(f(x_i), y_i), and this is known as empirical risk minimization (ERM). However, if the data distribution is smooth, as is the case with most real datasets, it is desirable to minimize the risk in the vicinity of the provided samples [61,9], R_vic(f) = (1/m) Σ_{i=1}^{m} L(f(x̂_i), ŷ_i), where {(x̂_i, ŷ_i)}_{i=1}^{m} are points sampled from the vicinity of the original data distribution, also known as the vicinal distribution P_vic(x, y). This is known as vicinal risk minimization (VRM), and theoretical analysis [61,9,72] has shown that VRM generalizes well when at least one of these two criteria is satisfied: (i) the vicinal data distribution P_vic(x, y) must be a good approximation of the actual data distribution P(x, y), and (ii) the class F of functions must have a suitably small capacity. Since modern deep neural networks have up to hundreds of millions of parameters, it is imperative that the former criterion is met.
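To make the distinction concrete, the following is a minimal sketch (with hypothetical function names, not from the paper) contrasting the empirical risk, computed only on the original samples, with a Monte Carlo estimate of the vicinal risk, computed on samples drawn from a vicinal distribution around them:

```python
import numpy as np

def empirical_risk(loss, f, X, Y):
    """ERM objective: average loss over the original training samples."""
    return float(np.mean([loss(f(x), y) for x, y in zip(X, Y)]))

def vicinal_risk(loss, f, X, Y, sample_vicinity, n_draws=10, rng=None):
    """Monte Carlo estimate of the VRM objective: average loss over samples drawn
    from a vicinal distribution P_vic around the original samples.
    `sample_vicinity(X, Y, rng)` is any augmentation, e.g., additive noise or zeta-mixup."""
    rng = rng or np.random.default_rng()
    risks = []
    for _ in range(n_draws):
        X_v, Y_v = sample_vicinity(X, Y, rng)
        risks.append(np.mean([loss(f(x), y) for x, y in zip(X_v, Y_v)]))
    return float(np.mean(risks))
```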
Data Augmentation: A popular example of VRM is the use of data augmentation for training deep neural networks. For example, applying geometric and intensity-based transformations to images leads to a diverse training dataset, allowing the prediction models to generalize well to unseen samples [54]. However, the assumption underlying these transformations, that points sampled in the vicinity of the original data distribution share the same class label, is rather limiting and does not account for complex interactions (e.g., proximity relationships) between class-specific data distributions in the input space. Recent approaches based on convex combinations of pairs of samples to synthesize new training samples aim to alleviate this by allowing the model to learn smoother decision boundaries [62]. Consider the general K-class classification task. mixup [73] synthesizes a new training sample (x̂, ŷ) from training data samples (x_i, y_i) and (x_j, y_j) as x̂ = λ x_i + (1 − λ) x_j, ŷ = λ y_i + (1 − λ) y_j (Eqn. 1), where λ ∈ [0, 1]. The labels y_i, y_j are converted to one-hot encoded vectors to allow for linear interpolation between pairs of labels. However, as we show in our experiments (Sec. 4), mixup leads to the synthesized points being sampled off the data manifold (Fig. 1 (a)).
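As a point of reference, a minimal sketch of this pairwise scheme (Eqn. 1) is shown below; the Beta(α, α) sampling of λ and the within-batch pairing follow the common mixup recipe and are illustrative rather than the exact reference implementation:

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.3, rng=None):
    """Pairwise mixup: convex combination of a batch with a shuffled copy of itself.
    x: (B, ...) inputs; y_onehot: (B, K) one-hot labels.
    Returns mixed inputs and soft probabilistic labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)              # lambda in [0, 1]
    perm = rng.permutation(len(x))            # random pairing within the batch
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix
```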
ζ-mixup Formulation: Going back to the K-class classification task, suppose we are given a set of N points {x_i}_{i=1}^{N} in a D-dimensional ambient space R^D with the corresponding labels {y_i}_{i=1}^{N}. Keeping in line with the manifold hypothesis [8,22], which states that complex data manifolds in high-dimensional ambient spaces are actually made up of samples from manifolds with low intrinsic dimensionalities, we assume that the N points are samples from K manifolds {M_i}_{i=1}^{K} of intrinsic dimensionalities {d_i}_{i=1}^{K}, where d_i << D ∀ i ∈ [1, K] (Fig. 1 (a)). We seek an augmentation method that facilitates a denser sampling of each intrinsic manifold M_i, thus generating more realistic and more diverse samples with richer labels. Following Wood et al. [64,63], we consider three criteria for evaluating the quality of synthetic data: (i) realism: allowing the generation of correctly labeled synthetic samples close to the original samples, ensuring the realism of the synthetic samples, (ii) diversity: facilitating the generation of more diverse synthetic samples by allowing exploration of the input space, and (iii) label richness when generating synthetic samples while still staying on the manifold of realistic samples. Additionally, we aim for: (iv) valid probabilistic labels from combinations of samples along with (v) computationally efficient (e.g., avoiding inter-sample distance calculations) augmentation of training batches.
To this end, we propose to synthesize a new sample (x̂_k, ŷ_k) as x̂_k = Σ_{i=1}^{N} w_i x_i, ŷ_k = Σ_{i=1}^{N} w_i y_i (Eqn. 2), where the w_i are the weights assigned to the N samples. One such weighting scheme that satisfies the aforementioned requirements consists of sample weights drawn from the terms of a p-series, i.e., w_i = i^(−p), which is a convergent series for p ≥ 1. Since this implies that the weight assigned to the first sample will be the largest, we randomize the order of the samples to ensure that the synthetic samples are not all generated near one original sample. Therefore, building upon the idea of local synthetic instances initially proposed for the augmentation of connectome datasets [7], we adopt the following formulation: given N samples (where 2 ≤ N ≤ m and thus, theoretically, the entire dataset), an N × N random permutation matrix π, and the resulting randomized ordering of samples s = π [1, 2, . . ., N]^T, the weights are defined as w_i = s_i^(−γ) / C (Eqn. 3), where C is the normalization constant and γ is a hyperparameter. As we show in our experiments later, γ allows us to control how far the synthetic samples can stray away from the original samples. Moreover, in order to ensure that ŷ_k in Eqn. 2 is a valid probabilistic label, w_i must satisfy w_i ≥ 0 ∀ i and Σ_{i=1}^{N} w_i = 1. Accordingly, we use L1-normalization, and C = Σ_{j=1}^{N} j^(−γ) is the N-truncated Riemann zeta function [50] ζ(z) evaluated at z = γ; hence we call our method ζ-mixup. An illustration of ζ-mixup for N = 3, D = 3, d_1 = d_2 = d_3 = 2 is shown in Fig. 1 (a). Notice how, despite generating convex combinations of samples from disjoint manifolds, the resulting synthetic samples are close to the original ones. A similar observation can be made for N = 4 and N = 8 in Fig. 1 (c). Since there exist N! possible N × N random permutation matrices, given N original samples, ζ-mixup can synthesize N! new samples for a single value of γ, as compared to mixup, which can only synthesize 1 new sample per sample pair for a single value of λ. As a result of the aforementioned formulation, ζ-mixup exhibits two desirable properties, which we present in the following 2 theorems (proofs in the Appendix). Theorem 1 states that for all values of γ ≥ γ_min, the weight assigned to one sample is greater than the sum of the weights assigned to all the other samples in a batch, thus implicitly introducing the desired notion of linearity only in the locality of the original samples. Theorem 2 states the equivalence of mixup and ζ-mixup and establishes the former as a special case of the latter.
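The following is a minimal NumPy sketch of Eqns. 2 and 3 for a single group of N samples; the function names are ours, and the reference implementation (Listing A1 in the Appendix) operates on whole batches instead:

```python
import numpy as np

def zeta_mixup_weights(N, gamma, rng):
    """p-series weights w_i = s_i^(-gamma) / C over a random ordering s of N samples,
    where C is the N-truncated Riemann zeta function evaluated at gamma."""
    s = rng.permutation(N) + 1                             # randomized ranks 1..N (permutation pi)
    C = np.sum(np.arange(1, N + 1, dtype=float) ** -gamma) # normalization constant
    return (s.astype(float) ** -gamma) / C                 # w_i >= 0 and sum_i w_i = 1

def zeta_mixup_group(x, y_onehot, gamma=2.8, rng=None):
    """Mix all N samples of a group into one synthetic sample (x_hat, y_hat)."""
    rng = rng or np.random.default_rng()
    w = zeta_mixup_weights(len(x), gamma, rng)
    x_hat = np.tensordot(w, x, axes=1)                     # sum_i w_i * x_i
    y_hat = w @ y_onehot                                   # soft probabilistic label
    return x_hat, y_hat
```

Each draw of the permutation yields a different synthetic sample, which is where the N! count mentioned above comes from.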

Datasets and Experimental Details
Synthetic Data: We first generate two-class distributions of 2^9 = 512 samples with non-linear class boundaries in the shape of interleaving crescents (CRESCENTS) and spirals (SPIRALS), and add Gaussian noise N(0, 0.1), as shown in the "Input" column of Fig. 2 (a). Next, moving on to higher-dimensional spaces, we generate synthetic data distributed along a helix. In particular, we sample 2^13 = 8,192 points off a 1-D helix embedded in R^3 (see the "Input" column of Fig. 2 (b)) and, as a manifestation of low-D manifolds lying in high-D ambient spaces, a 1-D helix in R^12.
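For illustration, distributions of this kind could be generated roughly as follows; the exact crescent/spiral parameterizations and the helix radius and pitch used in the paper are not specified here, so the constants below are assumptions:

```python
import numpy as np

def crescents(n=512, noise=0.1, rng=None):
    """Two interleaving crescents (CRESCENTS-like) with additive Gaussian noise N(0, noise)."""
    rng = rng or np.random.default_rng(0)
    t = rng.uniform(0.0, np.pi, n // 2)
    upper = np.stack([np.cos(t), np.sin(t)], axis=1)
    lower = np.stack([1.0 - np.cos(t), 0.5 - np.sin(t)], axis=1)
    x = np.concatenate([upper, lower]) + rng.normal(0.0, noise, (2 * (n // 2), 2))
    y = np.repeat([0, 1], n // 2)
    return x, y

def helix(n=8192, turns=4, noise=0.0, rng=None):
    """Points along a 1-D helix embedded in R^3."""
    rng = rng or np.random.default_rng(0)
    t = np.sort(rng.uniform(0.0, 2.0 * np.pi * turns, n))
    pts = np.stack([np.cos(t), np.sin(t), t / (2.0 * np.pi)], axis=1)
    return pts + rng.normal(0.0, noise, pts.shape)
```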

Results and Discussion
We present experimental evaluation on controlled synthetic (1-D manifolds in 2-D and 3-D, 3-D manifolds in 12-D) and on 24 real-world natural and medical image datasets of various modalities.
We evaluate the quality of ζ-mixup's outputs: directly, by assessing the realism, label correctness, diversity, richness [63,64], and preservation of intrinsic dimensionality of the generated samples; as well as indirectly, by assessing the effect of the samples on the performance of downstream classification tasks.

Realism and Label Correctness
While it is desirable that the output of any augmentation method be different from the original data in order to better minimize R_vic (Sec. 2), we want to avoid sampling synthetic points off the original data manifold. Applying mixup to the CRESCENTS and SPIRALS datasets shows that mixup does not respect the individual class boundaries and synthesizes samples off the data manifold, also known as manifold intrusion [25]. This also results in the generated samples being wrongly labeled, i.e., points in the "red" class's region being assigned "blue" labels and vice versa, which we term "label error". On the other hand, ζ-mixup preserves the class decision boundaries irrespective of the hyperparameter γ and additionally allows for a controlled interpolation between the original distribution and mixup-like output. With ζ-mixup, small values of γ (greater than γ_min; see Theorem 1) lead to samples being generated further away from the original data, and as γ increases, the resulting distribution approaches the original data.
Applying mixup in 3-D space (Fig. 2 (b)) results in a somewhat extreme case of the generated points being sampled off the data manifold, filling up the entire hollow region inside the helical distribution.
ζ-mixup, however, similar to Fig. 2 (a), generates points that are much closer to the original points, and increasing γ to a large value, say γ = 6.0, leads the generated samples to lie almost perfectly on the original data manifold.
Moving on to higher dimensions with the MNIST data, i.e., 784-D, we observe that the problems with mixup's output are even more severe and that the improvements from using ζ-mixup are more conspicuous. For each digit class in the MNIST dataset, we take the first 10 samples as shown in Fig. 3 (a) and use mixup and ζ-mixup to generate 100 new images each (Fig. 3 (b-c)). It is easy to see that the digits in ζ-mixup's output are more discernible than those in mixup's output.
Finally, to analyze the correctness of probabilistic labels in the outputs of mixup and ζ-mixup, we pick 4 samples from each. mixup's outputs (Fig. 3 (d)) all look like images of a handwritten "8". The soft label of the first digit in Fig. 3 (d) is [0, 0.53, 0, 0, 0, 0.47, 0, 0, 0, 0], where the i-th index is the probability of the i-th digit, implying that this output has been obtained by mixing images of the digits "1" and "5". Interestingly, the resulting output looks like neither the digit "1" nor the digit "5", and the digit "8" is not one of the classes used as input for this image. That is, with mixup, there is a disagreement between the appearance of the synthesized image and its assigned label. Similar label error exists in the other images in Fig. 3 (d). On the other hand, there is a clear agreement between the images produced by ζ-mixup and the labels assigned to them (Fig. 3 (e)). Next, we set out to quantify (i) the realism and (ii) the label correctness of mixup- and ζ-mixup-synthesized images. To this end, we assume access to an Oracle that can recognize MNIST digits. For (i), we hypothesize that the more realistic an image is, the more certain the Oracle will be about the digit in it, and vice versa.
For example, although the first image in Fig. 3 (d) is a combination of a "1" and a "5", the resulting image looks very similar to a realistic handwritten "8". On the other hand, consider the highlighted and zoomed digits in Fig. 3 (b). For an Oracle, images like these are ambiguous and do not belong to one particular class. Consequently, the uncertainty of the Oracle's prediction will be high. We therefore adopt the Oracle's entropy (H) as a proxy for realism. For (ii), we use cross entropy (CE) to compare the soft labels assigned by either mixup or ζ-mixup to the label assigned by the Oracle. For example, if the resulting digit in a synthesized image is deemed an "8" by the Oracle and the label assigned to the sample, by mixup or ζ-mixup, is also "8", then the CE is low and the label is correct. We also note that for the Oracle, the certainty of the predictions is correlated with the correctness of the label. Finally, to address the issue of which Oracle to use, we adopt a highly accurate LeNet-5 [42] MNIST digit classifier that achieves 99.31% classification accuracy on the standardized MNIST test set.
Fig. 3 (f) and (g) show the quantitative results for the realism (∝ 1/H) of mixup's and ζ-mixup's outputs and for the correctness of the corresponding labels (∝ 1/CE), as evaluated by the Oracle, using kernel density estimate (KDE) plots with normalized areas. For both metrics, lower values (along the horizontal axes) are better. In Fig. 3 (f), we observe that ζ-mixup has a higher peak at low values of entropy as compared to mixup, indicating that the former generates more realistic samples. The inset figure therein shows the same plot with a logarithmic scale for the density, and ζ-mixup's improvements over mixup for higher values of entropy are clearly discernible there. Similarly, in Fig. 3 (g), we see that the cross entropy values for ζ-mixup are concentrated around 0, whereas those for mixup are spread out more widely, implying that the former produces fewer samples with label error. If we restrict our samples to only those whose entropy of Oracle's predictions is less than 0.1, meaning they are highly realistic samples, the label correctness distribution remains similar, as shown in the inset figure; i.e., mixup's outputs that look realistic are more likely to exhibit label error.
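A minimal sketch of these two Oracle-based measurements is given below; `oracle` stands in for the LeNet-5 classifier, and the exact direction of the cross entropy (Oracle prediction vs. assigned soft label) is our assumption about the paper's setup:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def oracle_realism_and_label_correctness(oracle, x_synth, y_soft, eps=1e-12):
    """Realism proxy: entropy H of the Oracle's prediction (low H => realistic).
    Label correctness proxy: cross entropy CE between the Oracle's prediction and
    the soft label assigned by the augmentation method (low CE => correct label)."""
    p = F.softmax(oracle(x_synth), dim=1)            # Oracle class probabilities, (B, K)
    H = -(p * torch.log(p + eps)).sum(dim=1)         # per-sample entropy
    CE = -(p * torch.log(y_soft + eps)).sum(dim=1)   # per-sample cross entropy
    return H, CE
```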

Diversity
We can control the diversity of ζ-mixup's output by changing N, i.e., the number of points used as input to ζ-mixup, and the hyperparameter γ. As the value of γ increases, the resulting distribution of the sampled points approaches the original data distribution. For example, in Fig. 2 (a), we see that changing γ leads to an interpolation between mixup-like and original input-like distributions.
Similarly, in Fig. 2 (c), we can see the effects of varying the batch size N (i.e., the number of input samples used to synthesize new samples) and γ. As N increases, more original samples are used to generate the synthetic samples, and therefore the synthesized samples allow for a wider exploration of the space around the original samples. This effect is more pronounced for smaller values of γ because the weight assigned to one point, while still dominating all other weights, is not large enough to pull the synthetic sample close to it. This, along with having fewer points to compute the weighted average over, leads to samples being generated farther from the original distribution as γ decreases.
On the other hand, as γ increases, the contribution of one sample gets progressively larger, and as a result, the effect of a large γ overshadows the effect of N .

Richness of Labels
The third desirable property of synthetic data is that the generated samples should not only capture and reflect the diversity of the original dataset but also build upon and extend it.
As discussed in Sec. 2, for a single value of λ, mixup generates 1 synthetic sample for every pair of original samples. In contrast, given a single value of γ and N original samples, ζ-mixup can generate N! new samples. The richness of the generated labels in ζ-mixup comes from the fact that, unlike mixup, whose outputs can lie anywhere on the straight line between the 2 original samples, ζ-mixup generates samples that are close to the original samples (as discussed in "Realism" above) while still incorporating information from the original N samples. As a case in point, consider the visualization of the soft labels in mixup's and ζ-mixup's outputs on the MNIST dataset. Examining Fig. 3 (b, d) again, we note that mixup's outputs are only made up of inputs from at most 2 classes. On the other hand, because of ζ-mixup's formulation, the outputs of ζ-mixup can be made up of inputs from up to min(N, K) classes. This can also be seen in ζ-mixup's outputs in Fig. 3 (e): while the probability of one class dominates all others (see Theorem 1), inputs from multiple classes, in addition to the dominant class, contribute to the final output, and this is reflected in the soft labels, leading to richer labels with information from multiple classes in 1 synthetic sample, which in turn arguably allow models trained on these samples to better learn the class decision boundaries.

Preserving the Intrinsic Dimensionality of the Original Data
As a direct consequence of the realism of synthetic data discussed above and its relation to the data manifold, we evaluate how the intrinsic dimensionality (ID hereafter) of the datasets changes when mixup and ζ-mixup are applied. With our 3-D manifold visualizations in Fig. 2 (b), we saw that mixup samples points off the data manifold while ζ-mixup limits the exploration of the high-dimensional space, thus maintaining a lower ID. In order to substantiate this claim with quantitative results, we estimate the IDs of several datasets, both synthetic and real-world, and compare how the IDs of mixup- and ζ-mixup-generated distributions compare to those of the respective original distributions. For synthetic data, we use the high-dimensional datasets described in Sec. 3, i.e., 1-D helical manifolds embedded in R^3 and in R^12. For real-world datasets, we use the entire training partitions (50,000 images each) of the CIFAR-10 and CIFAR-100 datasets. For each point in all 4 datasets, the local measure of the ID (local ID hereafter) is calculated using a k-nearest neighborhood around each point with k = 8 and k = 128 [4,23]. The means and the standard deviations of the local ID estimates for all the datasets (the original data distribution, mixup's output, and ζ-mixup's outputs for γ ∈ [0, 15]) are visualized in Fig. 4.
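The paper uses the scikit-dimension library for this estimate; as a self-contained sketch of the same idea (not the library call itself), the Fukunaga-Olsen criterion can be computed directly with NumPy and scikit-learn:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_id_fukunaga_olsen(X, k=8, alpha=0.05):
    """Pointwise (local) intrinsic dimensionality: within each k-nearest-neighborhood,
    count the local PCA eigenvalues larger than alpha * (largest eigenvalue)."""
    X = np.asarray(X, dtype=float).reshape(len(X), -1)      # flatten images to vectors
    idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X, return_distance=False)
    local_ids = np.empty(len(X))
    for i, neigh in enumerate(idx):
        P = X[neigh] - X[neigh].mean(axis=0)                # centered neighborhood, (k+1, D)
        sv = np.linalg.svd(P, compute_uv=False)             # singular values, descending order
        ev = sv ** 2                                        # PCA eigenvalues (up to scaling)
        local_ids[i] = np.count_nonzero(ev > alpha * ev[0]) # significant eigenvalue count
    return local_ids.mean(), local_ids.std()
```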
The results in Fig. 4 support the observations from the discussion of the realism (Sec. 4.1) and the diversity (Sec. 4.2) of the outputs. In particular, notice how mixup's off-manifold sampling leads to an inflated estimate of the local ID, whereas the local ID of ζ-mixup's output is lower than that of mixup and, as expected, can be controlled using γ. This difference is even more apparent with the real-world high-dimensional (3072-D) datasets, i.e., CIFAR-10 and CIFAR-100, where for all values of γ ≥ γ_min (Theorem 1), as γ increases, the local ID of ζ-mixup's output drops dramatically, meaning the resulting distributions lie on progressively lower-dimensional intrinsic manifolds.

Evaluation on Downstream Task: Classification
Table 1 contains the performance evaluation of models trained using traditional data augmentation techniques, e.g., rotation, flipping, and cropping ("ERM"), and using mixup's and ζ-mixup's outputs on the natural image datasets. For ζ-mixup, we choose 3 values of γ: 2.4 (to allow exploration of the space around the original data manifold), 2.8 (to allow a behavior that permits exploration while still restricting the points to a small region around the original distribution), and 4.0 (to restrict the synthetic samples to be close to the original samples). We see that 17 of the 18 models in Table 1 trained with ζ-mixup outperform their ERM and mixup counterparts, with the lone exception being a model that is as accurate as its mixup counterpart. Next, Table 2 shows the performance of the models on the 10 skin lesion image diagnosis datasets (γ = {2.4, 2.8, 4.0}). For both ResNet-18 and ResNet-50 and for all 10 SKIN datasets, ζ-mixup outperforms both mixup and ERM on the skin lesion diagnosis tasks. Finally, Table 3 presents the quantitative evaluation on the 8 classification datasets from the MedMNIST collection, here using ζ-mixup only with γ = 2.8. In 6 out of the 8 datasets, ζ-mixup outperforms both mixup and ERM, and in the other 2, ζ-mixup achieves the highest value for 1 of the 2 metrics each.
Note that these selected values of γ can be changed to other reasonable values (please see the Appendix for a sensitivity analysis of γ), and, as shown above qualitatively and quantitatively, the desirable properties of ζ-mixup hold for all values of γ ≥ γ_min. Consequently, our quantitative results on classification tasks on 24 datasets show that ζ-mixup outperforms ERM and mixup for all the datasets and, in most cases, for all 3 selected values of γ.

Computational Efficiency
ζ-mixup's PyTorch [47] implementation is provided in the Appendix. Our benchmarking experiments (Appendix) show that training DNNs for downstream tasks (Sec. 4.5) with ζ-mixup is at least as fast as with mixup, and that for augmenting batches of 32 RGB images of 224 × 224 resolution, ζ-mixup is over 2× faster than mixup.

Conclusion
We proposed ζ-mixup, a multi-sample generalization of the popular mixup technique for data augmentation, which uses the terms of a truncated Riemann zeta function to combine N ≥ 2 samples of the original dataset. We presented theoretical proofs that mixup is a special case of ζ-mixup (when N = 2 and with a specific setting of ζ-mixup's hyperparameter γ) and that the ζ-mixup formulation allows the weight assigned to one sample to dominate all the others, thus ensuring the synthesized samples are on or close to the original data manifold. The latter property leads to generating samples that are more realistic and, along with allowing N > 2, generates more diverse samples with richer labels as compared to their mixup counterparts. We presented extensive experimental evaluation on controlled synthetic (1-D manifolds in 2-D and 3-D; 3-D manifolds in 12-D) and 24 real-world (natural and medical) image datasets of various modalities. We demonstrated quantitatively that, compared to mixup, ζ-mixup better preserves the intrinsic dimensionality of the original datasets, provides higher levels of realism and label correctness, and achieves stronger performance (i.e., higher accuracy) on multiple downstream classification tasks. Future work will include exploring ζ-mixup in the learned feature space, although opinions on the theoretical justifications for interpolating in the latent space have not yet converged [11].

A Equivalence of mixup and ζ-mixup (Proof of Theorem 2)

For N = 2 samples x_1 and x_2, the ζ-mixup formulation (Eqns. 2 and 3) yields x̂ = (1/C) x_1 + (2^(−γ)/C) x_2, where C = 1 + 2^(−γ) and we assume, without loss of generality, that x_1 is ranked first in the ordering s. For this to be equivalent to mixup (Eqn. 1), we should have λ = 1 / (1 + 2^(−γ)). Solving for γ, we have γ = log_2(λ / (1 − λ)).

B ζ-mixup: Implementation and Benchmarking
The ζ-mixup implementation in PyTorch [47] is shown in Listing A1 and in the Appendix_utils.py file. Unlike mixup, which performs scalar multiplications of λ and 1 − λ with the input batches, ζ-mixup performs a single matrix multiplication of the input batches with the weights. With our optimized implementation, we find that model training times using ζ-mixup are as fast as, if not faster than, those using mixup when evaluated on datasets with different spatial resolutions: CIFAR-10 (32 × 32 RGB images), STL-10 (96 × 96 RGB images), and Imagenette (224 × 224 RGB images), as shown in Table A1. Moreover, when using mixup and ζ-mixup on a batch of 32 tensors of 224 × 224 spatial resolution with 3 feature channels, which is the case with popular ImageNet-like training regimes, ζ-mixup is over twice as fast as mixup and over 110 times faster than the original local synthetic instances implementation [7].
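Listing A1 itself is in the supplementary code; the core idea described above (one matrix multiplication of the batch with a weight matrix) can be sketched as follows, with the caveat that this simplified version mixes every output over the whole batch and may differ from the released implementation in how groups and permutations are formed:

```python
import torch

def zeta_mixup_batched(x, y_onehot, gamma=2.8, generator=None):
    """Batched zeta-mixup: each synthesized sample is a weighted average of the batch,
    with p-series weights over an independent random ordering per output sample."""
    N = x.shape[0]
    # one random ordering (row) per synthesized sample -> (N, N) matrix of ranks 1..N
    ranks = torch.argsort(torch.rand(N, N, generator=generator), dim=1) + 1
    W = ranks.to(x.dtype).pow(-gamma)
    W = W / W.sum(dim=1, keepdim=True)             # rows sum to 1 (truncated-zeta normalization)
    x_mix = torch.einsum('ij,j...->i...', W, x)    # single matmul over the batch dimension
    y_mix = W @ y_onehot                           # soft probabilistic labels
    return x_mix, y_mix
```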
All models were trained and benchmarked on a workstation with an Intel Core i9-9900K CPU, 32 GB of memory, and an Nvidia GeForce GTX TITAN X GPU with 12 GB of memory.

D.2 Skin Lesion Image Diagnosis Datasets
Skin lesion imaging has 2 predominant modalities: clinical images and dermoscopic images. While both capture RGB images, clinical images consist of close-up lesion images acquired with consumer-grade cameras, whereas dermoscopic images are acquired using a dermatoscope, which allows for the identification of detailed morphological structures [45] along with fewer imaging-related artifacts [37].
For all the 10 skin lesion image diagnosis datasets, we train classification models with the ResNet-18 and the ResNet-50 architectures.For data augmentation, we take a square center-crop of the image with edge length equal to 0.8*min(height, width) and then resize it to 256 × 256 spatial resolution.
The ISIC 2016, 2017, and 2018 datasets come with standardized partitions that we use for training and evaluating our models, and for the other 7 datasets, we perform a stratified split in the ratio training : validation : testing :: 70 : 10 : 20. For all the datasets, we use the 5-class diagnosis labels used in the original dataset papers and in the literature [35,17,2]: "basal cell carcinoma", "nevus", "melanoma", "seborrheic keratosis", and "others".
For all the datasets except ISIC 2018, we use a batch size of 32 images and train the models for 50 epochs with an initial learning rate of 0.01, which is decayed by a multiplicative factor of 0.1 every 10 epochs. Given that the ISIC 2018 dataset is considerably larger, we train on it for 20 epochs with 32 images in a batch and an initial learning rate of 0.01, which is decayed by a multiplicative factor of 0.1 every 4 epochs. As with the experiments on the natural image datasets, all models are optimized using cross entropy loss and SGD with Nesterov momentum of 0.9 and a weight decay of 5e−4.
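A rough sketch of the preprocessing and optimization setup described in this subsection is given below; the exact transform pipeline and scheduler in the authors' code may differ:

```python
import torch
from torchvision import models, transforms
from PIL import Image

def skin_preprocess(img: Image.Image) -> Image.Image:
    """Square center-crop with edge 0.8 * min(height, width), then resize to 256 x 256."""
    side = int(0.8 * min(img.height, img.width))
    return transforms.Resize((256, 256))(transforms.CenterCrop(side)(img))

model = models.resnet18(num_classes=5)           # 5-class skin lesion diagnosis
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9,
                            nesterov=True, weight_decay=5e-4)
# learning rate decayed by 0.1 every 10 epochs (every 4 epochs for ISIC 2018)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
```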

D.3 Datasets of Other Medical Imaging Modalities from MedMNIST
For all the 8 datasets from the MedMNIST collection, we train and evaluate classification models with the ResNet-18 architecture on the standard training, validation, and testing partitions. The images are used at their original 28 × 28 spatial resolution.
For all the datasets, we use a learning rate of 0.01 and, following the original paper [68], we use cross entropy loss with SGD on batches of 128 images to optimize the classification models.

F Detailed Quantitative Results on Skin Lesion Diagnosis Datasets
Due to space constraints, only the balanced accuracy values for SKIN are reported in the main paper (Table 2). Table A3 lists the balanced accuracy (ACC_bal) and the micro- and macro-averaged F1 scores (F1-micro and F1-macro, respectively) for all the models trained and evaluated on SKIN.
Table A3: Classification performance evaluated on SKIN. † and ‡ denote dermoscopic and clinical skin lesion images, respectively. The evaluation metrics are ACC_bal, F1-micro, and F1-macro. The highest and the second highest values of each metric are formatted with bold and underline, respectively.

Figure 1: Overview of mixup (b) and ζ-mixup (a, c). The original and the synthesized samples are denoted by distinct markers, and line segments indicate which original samples were used to create the new ones. The line thicknesses denote the relative weights assigned to the original samples. Observe how ζ-mixup can mix any number of samples (e.g., 3 in (a) and 4 or 8 in (c)), and that ζ-mixup's formulation allows the generated samples to be close to the original distribution while still incorporating rich information from several samples.


Figure 2: Visualizing how mixup and ζ-mixup synthesize new samples. Notice that mixup produces samples that (a) are assigned wrong labels and (b) are sampled off the original data manifold, with an extreme example being where the points are sampled from the hollow region in the helix. A moderately low value of γ allows for a more reasonable exploration of the data manifold, with higher values of N allowing more diversity in the synthesized points.

Figure 3: Visualizing the results obtained using mixup and ζ-mixup on images from the MNIST dataset. In (d) and (e), we visualize the probabilistic "soft" labels assigned to images generated by mixup and ζ-mixup, respectively. Notice how all images in (d) look close to the digit "8" while their assigned soft labels do not contain the class "8". ζ-mixup alleviates this issue, and the soft labels in (e) correspond exactly to the class the synthesized images belong to. Also note how mixup produces images with a wrong label, i.e., a label different from the original labels of the 2 images it interpolates from. In (f) and (g), we evaluate the realism of mixup's and ζ-mixup's generated samples and the correctness of the corresponding labels by measuring the entropy of the Oracle's predictions (H) and the cross entropy of the Oracle's predictions with the soft labels (CE), respectively. For both (f) and (g), lower values are better.

Figure 4: Visualizing how ζ-mixup affects the local intrinsic dimensionality of synthesized datasets distributed as 1-D helices ((a) and (b)) and a 3-D manifold ((c)) in a higher-dimensional embedding space as the hyperparameter γ changes. The mean and the standard deviation of the intrinsic dimensionality are shown using lines (bold or dash-dotted) and shaded bands, respectively. The vertical dotted line in all the plots denotes the value γ = γ_min (Theorem 1).

Table 2: Classification performance (ACC_bal) evaluated on SKIN. The highest and the second highest values of ACC_bal are formatted with bold and underline, respectively.

Table A1: Benchmarking ζ-mixup against mixup for training models on CIFAR-10, STL-10, and Imagenette, and for augmenting a batch of 32 RGB images of 224 × 224 spatial resolution.

While the ID of a dataset can be estimated globally, datasets can have heterogeneous regions and thus consist of regions of varying IDs. As such, instead of a global estimate of the ID, a local measure of the ID (local ID hereafter), estimated in the local neighborhood of each point in the dataset with neighborhoods typically defined using the k-nearest neighbors, is more informative of the inherent organization of the dataset. For our local ID estimation experiments, we use a principal component analysis-based local ID estimator from the scikit-dimension Python library [4] using the Fukunaga-Olsen method [23], where an eigenvalue is considered significant if it is larger than 5% of the largest eigenvalue.

F-MNIST has 28 × 28 grayscale images. The CIFAR-10 and CIFAR-100 datasets have RGB images with 32 × 32 spatial resolution. STL-10 consists of RGB images with a higher 96 × 96 resolution and also has fewer training images than testing images per class. Released by Jeremy Howard to facilitate evaluation on natural images from the original ImageNet dataset [19] but with more reasonable computational and time requirements, Imagenette and Imagewoof [31] are each 10-class subsets of the ImageNet dataset. The lists of ImageNet classes and the corresponding WordNet synset IDs for both these datasets are shown in Table A2. Both datasets have standardized training and validation partitions.

For all the 6 natural image datasets (CIFAR-10, CIFAR-100, F-MNIST, STL-10, Imagenette, and Imagewoof), we train and validate deep models with the ResNet-18 architecture [28] on the standard training and validation partitions and use random horizontal flipping for data augmentation. For CIFAR-10, CIFAR-100, F-MNIST, and STL-10, the models are trained on the original image resolutions, whereas for Imagenette and Imagewoof, the images are resized to 224 × 224. For CIFAR-10, CIFAR-100, and F-MNIST, the models are trained for 200 epochs with an initial learning rate of 0.1, which is decayed by a multiplicative factor of 0.2 at the 80th, 120th, and 160th epochs, with batches of 128 images for the CIFAR datasets and 32 images for F-MNIST. For STL-10, the models are trained for 120 epochs with a batch size of 32 and an initial learning rate of 0.1, which is decayed by a multiplicative factor of 0.2 at the 80th epoch. Finally, for Imagenette and Imagewoof, the models are trained for 80 epochs with a batch size of 32 and an initial learning rate of 0.01, which is decayed by a multiplicative factor of 0.2 at the 25th, 50th, and 65th epochs. All models are optimized using cross entropy loss and mini-batch stochastic gradient descent (SGD) with Nesterov momentum of 0.9 and a weight decay of 5e−4.

Table A2: List of classes from ImageNet and the corresponding WordNet synset IDs in the Imagenette and Imagewoof datasets.