
Table 7 Average results by learner and evaluation metric for a subset of papers, experimental/novel methods vs. baseline methods

From: The use of generative adversarial networks to alleviate class imbalance in tabular data: a survey

 

| Learner | | AUC | Balanced Accuracy | F1 Score | Precision | Ranking | Recall | novel > baseline | baseline > novel |
|---|---|---|---|---|---|---|---|---|---|
| CNN | | | | | | | | 4 | 0 |
| | Novel | | 0.6767 | 0.9947 | 0.9936 | | 0.9958 | | |
| | Baseline | | 0.6187 | 0.9783 | 0.9792 | | 0.9805 | | |
| DT | | | | | | | | 2 | 0 |
| | Novel | | | 0.9552 | | 3.3370 | | | |
| | Baseline | | | 0.9482 | | 4.0549 | | | |
| GBC | | | | | | | | 1 | 0 |
| | Novel | | | | | 2.4850 | | | |
| | Baseline | | | | | 4.2097 | | | |
| KNN | | | | | | | | 1 | 1 |
| | Novel | | | 0.2600 | | 2.8140 | | | |
| | Baseline | | | 0.2736 | | 4.1378 | | | |
| LR | | | | | | | | 1 | 0 |
| | Novel | | | | | 3.6300 | | | |
| | Baseline | | | | | 3.9697 | | | |
| MLP | | | | | | | | 3 | 1 |
| | Novel | | 0.9937 | 0.9935 | 0.9890 | | 0.9637 | | |
| | Baseline | | 0.9723 | 0.9703 | 0.9725 | | 0.9814 | | |
| NB | | | | | | | | 0 | 1 |
| | Novel | | | 0.1800 | | | | | |
| | Baseline | | | 0.4279 | | | | | |
| RF | | | | | | | | 2 | 0 |
| | Novel | | | 0.9543 | | 2.5857 | | | |
| | Baseline | | | 0.9485 | | 4.0488 | | | |
| SAE | | | | | | | | 4 | 0 |
| | Novel | | 0.9951 | 0.9947 | 0.9936 | | 0.9958 | | |
| | Baseline | | 0.9805 | 0.9783 | 0.9792 | | 0.9805 | | |
| SVM | | | | | | | | 1 | 2 |
| | Novel | 0.7128 | | 0.4275 | | 2.4633 | | | |
| | Baseline | 0.9412 | | 0.5231 | | 4.2556 | | | |
| novel > baseline | | 0 | 3 | 5 | 3 | 6 | 2 | 19 | |
| baseline > novel | | 1 | 0 | 3 | 0 | 0 | 1 | | 5 |

Blank cells indicate that no paper in the subset reported that metric for that learner. For the Ranking metric, lower values are better; for all other metrics, higher values are better.

1. Experimental (novel) methods have a "better" evaluation metric in 19 of the 24 aggregation instances.
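The footnote's tally can be checked directly from the table's two summary rows. This is a minimal sketch using only the per-metric win counts shown above; the variable names are illustrative, not from the survey:

```python
# Per-metric win counts copied from the summary rows of Table 7.
novel_wins = {"AUC": 0, "Balanced Accuracy": 3, "F1 Score": 5,
              "Precision": 3, "Ranking": 6, "Recall": 2}
baseline_wins = {"AUC": 1, "Balanced Accuracy": 0, "F1 Score": 3,
                 "Precision": 0, "Ranking": 0, "Recall": 1}

# Novel methods win 19 of the 24 aggregation instances (19 + 5 = 24).
n_novel = sum(novel_wins.values())      # 19
n_total = n_novel + sum(baseline_wins.values())  # 24
print(f"{n_novel} of {n_total}")  # 19 of 24
```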