Table 2 The major parameters of the classification algorithms

From: An adaptive hybrid African vultures-Aquila optimizer with Xgb-Tree algorithm for fake news detection

| Classification algorithm | Parameters |
| --- | --- |
| Xgb-Tree | Number of boosting iterations \(nrounds = 100\) |
| | Maximum depth of a tree \(max\_depth = 3\) |
| | Minimum loss reduction \(gamma = 0\) |
| | Minimum sum of instance weights \(min\_child\_weight = 1\) |
| | Step-size shrinkage (learning rate) \(eta = 0.4\) |
| | Subsample ratio of columns \(colsample\_bytree = 0.8\) |
| | Subsample ratio of training instances \(subsample = 0.75\) |
| DT | Maximum depth of a tree \(max\_depth = 5\) |
| | Number of features \(max\_features = 1\) |
| k-NN | Number of nearest neighbors \(k = 5\), with the Euclidean distance metric |
| SVM | Regularization parameter \(C = 1\) |
| | Degree of the polynomial kernel \(degree = 2\) |
| RF | Number of trees in the forest \(n\_estimators = 10\) |
| | Maximum depth of a tree \(max\_depth = 5\) |
| | Number of features \(max\_features = 1\) |
| MLP | Numbers of neurons in the hidden layers \(hidden\_layer\_sizes = (1000, 500, 100)\) |
| | Strength of the L2 regularization term \(alpha = 0.001\) |
| | Maximum number of iterations \(max\_iter = 1000\) |
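
As a convenience, the settings in Table 2 could be reproduced in Python roughly as sketched below. This is an assumption about the toolchain: the \(nrounds\)/\(eta\) naming suggests the paper may have used R's caret `xgbTree` method, so in the xgboost Python API `nrounds` is mapped to `n_estimators` and `eta` to `learning_rate`; the remaining class and parameter names are the standard xgboost/scikit-learn ones, not necessarily the authors' exact implementation.

```python
# Minimal sketch of the Table 2 classifier configurations (assumed
# xgboost + scikit-learn equivalents of the reported parameters).
from xgboost import XGBClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

classifiers = {
    # Xgb-Tree: nrounds -> n_estimators, eta -> learning_rate
    "Xgb-Tree": XGBClassifier(
        n_estimators=100,        # nrounds = 100
        max_depth=3,
        gamma=0,                 # minimum loss reduction
        min_child_weight=1,
        learning_rate=0.4,       # eta = 0.4
        colsample_bytree=0.8,
        subsample=0.75,
    ),
    "DT": DecisionTreeClassifier(max_depth=5, max_features=1),
    # k-NN: 5 nearest neighbors under the Euclidean metric
    "k-NN": KNeighborsClassifier(n_neighbors=5, metric="euclidean"),
    # SVM: polynomial kernel of degree 2 with C = 1
    "SVM": SVC(C=1, kernel="poly", degree=2),
    "RF": RandomForestClassifier(n_estimators=10, max_depth=5, max_features=1),
    "MLP": MLPClassifier(
        hidden_layer_sizes=(1000, 500, 100),
        alpha=0.001,             # L2 regularization strength
        max_iter=1000,
    ),
}
```

Any parameter not listed in Table 2 is left at its library default here, which may differ from the defaults of whatever framework the authors actually used.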