Table 4 Classification accuracy of the recommended pipelines for the considered AeKNN architectures

From: Autoencoder-kNN meta-model based data characterization approach for an automated selection of AI algorithms

| Dataset | AeKNN (32) | AeKNN (16) | AeKNN (8) | AeKNN (32,16,32) | AeKNN (32,16,8,16,32) |
|---|---|---|---|---|---|
| APSFailure | **0.9921** | 0.9734 | 0.86475 | 0.9033 | 0.8325 |
| Higgs | **0.7283** | 0.6911 | 0.4872 | 0.6398 | 0.5316 |
| CustSat | 0.8155 | 0.7826 | 0.5318 | **0.8559** | 0.6943 |
| car | **0.9999** | 0.9808 | 0.7049 | 0.9203 | 0.8277 |
| kr-vs-kp | **0.9976** | 0.8130 | 0.6532 | 0.7330 | 0.7291 |
| airlines | 0.6982 | 0.6833 | 0.5627 | **0.7167** | 0.4334 |
| vehicle | 0.8880 | **0.8934** | 0.3591 | 0.8004 | 0.4098 |
| MiniBooNE | **0.9645** | 0.9217 | 0.8143 | 0.85 | 0.7436 |
| jannis | **0.7229** | 0.6843 | 0.6371 | 0.6911 | 0.6608 |
| nomao | 0.9708 | **0.9719** | 0.5395 | 0.6994 | 0.4659 |
| Credi-g | **0.7921** | 0.6502 | 0.5121 | 0.3871 | 0.4768 |
| Kc1 | **0.8793** | 0.8754 | 0.3597 | 0.7488 | 0.5691 |
| Cnae-9 | **0.9671** | 0.8923 | 0.5622 | 0.5208 | 0.6049 |
| albert | 0.8759 | 0.8131 | 0.6981 | 0.8439 | **0.9053** |
| Numerai28.6 | **0.5207** | 0.4530 | 0.3029 | 0.4760 | 0.2810 |
| segment | **0.9735** | 0.9622 | 0.8837 | 0.9508 | 0.5791 |
| Covertype | **0.8344** | 0.7189 | 0.6521 | 0.6305 | 0.4620 |
| KDDCup | **0.9740** | 0.8514 | 0.8034 | 0.8821 | 0.8572 |
| shuttle | 0.9362 | **0.9997** | 0.6429 | 0.8576 | 0.6744 |
| Gas_Sens-uci | **0.9843** | 0.9755 | 0.7256 | 0.9667 | 0.7032 |

The best performances among all architectures are highlighted in bold.
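
To make the column headers concrete, the sketch below illustrates one plausible reading of an AeKNN pipeline in which a tuple such as (32,16,32) denotes the hidden-layer widths of the autoencoder and the kNN classifier operates on the bottleneck (latent) representation. The layer widths come from the table headers, but the activation functions, training settings, and the value of k are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal AeKNN-style sketch (assumption: header tuples are autoencoder
# hidden-layer widths; kNN classifies the bottleneck encoding).
# Activations, epochs, batch size, and k are illustrative choices only.
import numpy as np
from tensorflow import keras
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler


def fit_aeknn(X_train, y_train, hidden=(32, 16, 32), k=5, epochs=50):
    """Train an autoencoder with the given hidden widths, then fit a kNN
    classifier on the representation produced by the middle (bottleneck) layer."""
    scaler = StandardScaler()
    Xs = scaler.fit_transform(X_train)
    n_features = Xs.shape[1]
    bottleneck_idx = len(hidden) // 2  # middle layer, e.g. 16 for (32, 16, 32)

    inputs = keras.Input(shape=(n_features,))
    x = inputs
    encoded = None
    for i, width in enumerate(hidden):
        x = keras.layers.Dense(width, activation="relu")(x)
        if i == bottleneck_idx:
            encoded = x  # latent features handed to the kNN classifier
    outputs = keras.layers.Dense(n_features, activation="linear")(x)

    autoencoder = keras.Model(inputs, outputs)
    encoder = keras.Model(inputs, encoded)
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(Xs, Xs, epochs=epochs, batch_size=256, verbose=0)

    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(encoder.predict(Xs, verbose=0), y_train)
    return scaler, encoder, knn


def predict_aeknn(model, X):
    """Encode new samples with the trained encoder and classify them with kNN."""
    scaler, encoder, knn = model
    return knn.predict(encoder.predict(scaler.transform(X), verbose=0))
```

Under this reading, the single-width architectures (32), (16), and (8) reduce to one hidden layer whose output is used directly as the latent space, while the symmetric tuples add further compression before the bottleneck.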