Table 13, from [20]. Bracketed numbers are confidence intervals; note that Xia et al. do not document the significance level used for these intervals. Here "softer" means the models are trained with all available features.

From: CatBoost for big data: an interdisciplinary review

Softer dataset

| Model | Accuracy | AUC | H-measure |
|---|---|---|---|
| LR-softer | 0.7516 [0.7508, 0.7523] | 0.6151 [0.6139, 0.6163] | 0.0843 [0.0827, 0.0860] |
| RT-softer | 0.6952 [0.6911, 0.6996] | 0.5444 [0.5391, 0.5493] | 0.0124 [0.0095, 0.0153] |
| BNN-softer | 0.7496 [0.7480, 0.7516] | 0.6120 [0.6095, 0.6151] | 0.0801 [0.0766, 0.0843] |
| RF-softer | 0.7436 [0.7415, 0.7456] | 0.6043 [0.6013, 0.6073] | 0.0695 [0.0659, 0.0733] |
| GBDT-softer | 0.7504 [0.7488, 0.7520] | 0.6132 [0.6107, 0.6158] | 0.0818 [0.0784, 0.0853] |
| XGBoost-softer | 0.7511 [0.7496, 0.7526] | 0.6143 [0.6120, 0.6167] | 0.0833 [0.0801, 0.0866] |
| CatBoost-softer | 0.7523 [0.7511, 0.7535] | 0.6162 [0.6144, 0.6180] | 0.0859 [0.0834, 0.0885] |

| Model | Type I rate | Type II rate |
|---|---|---|
| LR-softer | 0.1557 [0.1550, 0.1565] | 0.6142 [0.6123, 0.6160] |
| RT-softer | 0.2024 [0.1978, 0.2072] | 0.7087 [0.6994, 0.7198] |
| BNN-softer | 0.1569 [0.1557, 0.1580] | 0.6190 [0.6141, 0.6231] |
| RF-softer | 0.1617 [0.1599, 0.1639] | 0.6298 [0.6241, 0.6346] |
| GBDT-softer | 0.1564 [0.1554, 0.1574] | 0.6171 [0.6130, 0.6211] |
| XGBoost-softer | 0.1560 [0.1550, 0.1569] | 0.6153 [0.6115, 0.6190] |
| CatBoost-softer | 0.1552 [0.1545, 0.1560] | 0.6124 [0.6095, 0.6152] |
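Since the paper does not document how the bracketed intervals were produced, a percentile bootstrap is one common way such intervals are obtained for metrics like accuracy. The sketch below illustrates that procedure on synthetic labels; the 95% level (`alpha=0.05`) and the `bootstrap_ci` helper are assumptions for illustration, not the method Xia et al. actually used.

```python
import random

def bootstrap_ci(y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for accuracy.

    A sketch only: the significance level (alpha) is an assumption,
    since the source paper does not document the level it used.
    """
    rng = random.Random(seed)
    n = len(y_true)
    accs = []
    for _ in range(n_boot):
        # Resample indices with replacement and recompute accuracy
        idx = [rng.randrange(n) for _ in range(n)]
        correct = sum(1 for i in idx if y_true[i] == y_pred[i])
        accs.append(correct / n)
    accs.sort()
    lo = accs[int((alpha / 2) * n_boot)]
    hi = accs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical data: 1000 labels, predictions correct ~75% of the time
rng = random.Random(1)
y_true = [rng.randint(0, 1) for _ in range(1000)]
y_pred = [y if rng.random() < 0.75 else 1 - y for y in y_true]
lo, hi = bootstrap_ci(y_true, y_pred)
print(f"accuracy CI: [{lo:.4f}, {hi:.4f}]")
```

The same resampling loop applies to AUC, the H-measure, or the Type I/II error rates by swapping in the appropriate metric in place of accuracy.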