Table 7 Performance of AMLBID and the baseline tools on the benchmark datasets

From: Using meta-learning for automated algorithms selection and configuration: an experimental framework for industrial big data

| Dataset | AMLBID | TPOT | Auto-sklearn(V) | Auto-sklearn(E) | Original paper result |
|---|---|---|---|---|---|
| [44] | **0.9374** | 0.9120 | 0.8215 | 0.9283 | 0.85 |
| [45] | **0.9706** | 0.9517 | 0.9632 | 0.9356 | 0.95 |
| [46] | **0.9941** | 0.9907 | 0.9782 | 0.99 | 0.9895 |
| [47] | 0.9205 | **0.9991** | 0.9357 | 0.6863 | 0.9984 |
| [48] | 0.8971 | 0.6711 | 0.908 | **0.9723** | 0.9677 |
| [49] | 0.9706 | 0.7767 | 0.678 | **0.9843** | 0.9278 |
| [50] | **0.8967** | 0.8899 | 0.6783 | 0.7952 | 0.884 |
| [51] | **0.8748** | 0.7826 | 0.6702 | 0.7727 | 0.8659 |
| Wafer-ds | 0.8571 | 0.7312 | 0.8033 | **0.8953** | |
| HTRU | 0.8880 | 0.8415 | **0.9027** | 0.6591 | |
| Cnae-9 | **0.9671** | 0.8803 | 0.7922 | 0.8365 | |
| Gas_Sens | 0.9739 | **0.9843** | 0.9256 | 0.9468 | |
| Covertype | **0.8344** | 0.7307 | 0.7890 | 0.6521 | |
| Kc1 | **0.8793** | 0.7097 | 0.7697 | 0.8552 | |
| jannis | 0.6719 | **0.7229** | 0.6171 | 0.6845 | |
| MiniBooNE | **0.9645** | 0.9423 | 0.8343 | 0.8903 | |
| KDDCup | **0.9740** | 0.8934 | 0.9331 | 0.95 | - |
| segment | **0.9735** | 0.9681 | 0.9337 | 0.9542 | |
| Higgs | 0.713 | 0.726 | 0.7135 | **0.729** | |
| Credi-g | **0.7921** | 0.7188 | 0.5739 | 0.6121 | |
| shuttle | 0.9649 | **0.9905** | 0.8429 | 0.9362 | |
| APS Failure | 0.9910 | **0.9933** | 0.9716 | 0.984 | |
| nomao | **0.9708** | 0.9570 | 0.6995 | 0.7987 | |
| CustSat | **0.8559** | 0.8276 | 0.8072 | 0.8290 | |
| kr-vs-kp | **0.9976** | 0.9209 | 0.6532 | 0.7593 | |
| car | 0.9754 | **0.9999** | 0.8549 | 0.9462 | |
| albert | **0.8759** | 0.8005 | 0.8288 | 0.7981 | |
| airlines | 0.6982 | 0.6758 | **0.7094** | 0.5927 | |
| Numerai28.6 | **0.5207** | 0.4229 | 0.4836 | 0.4433 | |

  1. The best performances among all AutoML frameworks are highlighted in bold
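For context, a single cell of this table can be produced with a short evaluation script. The sketch below is a minimal illustration, assuming the reported scores are held-out accuracy, that the benchmark datasets correspond to the identically named OpenML datasets (here "segment"), and using a deliberately small TPOT search budget; the paper's exact splits, time budgets, and preprocessing are not specified here and may differ.

```python
# Minimal sketch: score one baseline tool (TPOT) on one benchmark dataset.
# Assumptions (not taken from the paper): OpenML "segment" dataset, a 75/25
# stratified split, held-out accuracy as the metric, and a small search budget.
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from tpot import TPOTClassifier

# Load a numeric benchmark dataset from OpenML (the "segment" row of Table 7).
X, y = fetch_openml("segment", version=1, return_X_y=True, as_frame=False)
y = LabelEncoder().fit_transform(y)  # encode string class labels as integers

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

# Run a small, illustrative TPOT search and score it on the held-out split.
automl = TPOTClassifier(generations=5, population_size=20,
                        random_state=42, n_jobs=-1, verbosity=2)
automl.fit(X_train, y_train)
print("Held-out accuracy:", automl.score(X_test, y_test))
```

An auto-sklearn baseline would follow the same load/split/fit/score pattern with `autosklearn.classification.AutoSklearnClassifier` in place of `TPOTClassifier`.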