Method | Network type | Method type | Description
---|---|---|---
ROS [23] | CNN | Data | ROS of minority classes until class balance is achieved
RUS [23] | CNN | Data | RUS of majority classes until class balance is achieved
Two-phase learning | CNN | Data | Pre-training with RUS or ROS, then fine-tuning with all data
Dynamic sampling [21] | CNN | Data | Sampling rates adjust throughout training based on previous iteration’s class-wise F1-scores |
MFE and MSFE loss [18] | MLP | Algorithm | New loss functions allow positive and negative classes to contribute equally to the loss
Focal loss [20] | CNN | Algorithm | New loss function down-weights easy-to-classify samples, reducing their impact on the total loss
CSDNN [89] | MLP | Algorithm | CE loss function modified to incorporate a pre-defined cost matrix |
CoSen CNN [19] | CNN | Algorithm | Cost matrix is learned through backpropagation and incorporated into output layer |
CSDBN-DE [90] | DBN | Algorithm | Cost matrix is learned through evolutionary algorithm and incorporated into output layer |
Threshold moving [23] | CNN | Algorithm | Decision threshold is adjusted by dividing output probabilities by prior class probabilities |
Category centers [91] | CNN | Algorithm | Class centroids are calculated in deep feature space and a K-NN classifier discriminates between classes
Very-deep NNs [92] | CNN | Algorithm | CNN network depths of up to 50 layers are used to examine convergence rates |
LMLE [22] | CNN | Hybrid | Triple-header hinge loss and quintuplet sampling generate more discriminative features |
DOS [117] | CNN | Hybrid | Minority class over-sampled in deep feature space using K-NN and micro-cluster loss |
CRL loss [118] | CNN | Hybrid | Class Rectification loss and hard sample mining produce more discriminative features |
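
The data-level rows above (ROS, RUS) reduce to simple index resampling before training. The sketch below is a minimal NumPy illustration under our own naming, not code from the cited studies: `random_oversample` duplicates minority samples with replacement until every class matches the majority count, and `random_undersample` discards majority samples down to the minority count.

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """ROS: duplicate minority-class samples (with replacement)
    until every class matches the majority-class count."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = []
    for c, n in zip(classes, counts):
        c_idx = np.flatnonzero(y == c)
        idx.append(c_idx)
        if n < target:
            # draw extra copies to make up the deficit
            idx.append(rng.choice(c_idx, size=target - n, replace=True))
    idx = np.concatenate(idx)
    rng.shuffle(idx)
    return X[idx], y[idx]

def random_undersample(X, y, seed=0):
    """RUS: randomly discard majority-class samples until every
    class matches the minority-class count."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.min()
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=target, replace=False)
        for c in classes
    ])
    rng.shuffle(idx)
    return X[idx], y[idx]
```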
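
The MFE and MSFE losses of Wang et al. [18] average the error separately over positive and negative samples, so the minority class cannot be drowned out by sheer sample count. A minimal PyTorch sketch for the binary case, assuming `probs` are sigmoid outputs, `targets` are 0/1 labels, and each batch contains both classes; the per-sample error here is a plain squared error, a simplification of the paper's formulation.

```python
import torch

def mfe_loss(probs, targets):
    """Mean False Error: squared error averaged separately over
    negatives (FPE) and positives (FNE), then summed, so each class
    contributes equally regardless of its size."""
    err = (probs - targets.float()) ** 2
    fpe = err[targets == 0].mean()  # false positive error (on negatives)
    fne = err[targets == 1].mean()  # false negative error (on positives)
    return fpe + fne

def msfe_loss(probs, targets):
    """Mean Squared False Error: squares FPE and FNE before summing,
    pressuring the optimizer to shrink both errors jointly."""
    err = (probs - targets.float()) ** 2
    fpe = err[targets == 0].mean()
    fne = err[targets == 1].mean()
    return fpe ** 2 + fne ** 2
```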
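
The focal loss row scales each sample's cross-entropy by (1 - p_t)^gamma, where p_t is the predicted probability of the true class, so confidently classified samples contribute little to the total loss. A minimal multi-class PyTorch sketch; gamma = 2 is the commonly reported default, and the optional class-balancing alpha weight is omitted here.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Cross-entropy modulated by (1 - p_t)**gamma: samples the model
    already classifies well (p_t near 1) are down-weighted."""
    log_pt = F.log_softmax(logits, dim=1)
    log_pt = log_pt.gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    return (-(1.0 - pt) ** gamma * log_pt).mean()
```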
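
Threshold moving [23] requires no retraining: the trained network's output probabilities are divided by the prior class probabilities at decision time, boosting the scores of rare classes. A minimal NumPy sketch, assuming `probs` holds softmax outputs and `priors` holds training-set class frequencies.

```python
import numpy as np

def threshold_moving(probs, priors):
    """Divide each class's output probability by its prior, renormalize,
    and predict: rare classes need less raw probability to win."""
    adjusted = probs / priors
    adjusted /= adjusted.sum(axis=1, keepdims=True)
    return adjusted.argmax(axis=1)
```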
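
The category centers row pairs deep features with a distance-based decision rule. The sketch below shows the simplest nearest-centroid variant (K-NN over the class centers with k = 1); extracting `train_feats` and `test_feats` from the CNN's penultimate layer is assumed to happen upstream, and the function name is our own.

```python
import numpy as np

def nearest_center_predict(train_feats, train_labels, test_feats):
    """Compute one centroid per class in deep feature space, then
    assign each test sample to the class of its nearest centroid."""
    classes = np.unique(train_labels)
    centers = np.stack([train_feats[train_labels == c].mean(axis=0)
                        for c in classes])
    dists = np.linalg.norm(test_feats[:, None, :] - centers[None, :, :], axis=2)
    return classes[dists.argmin(axis=1)]
```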