
Table 2 Description of the machine learning methods adopted in this paper. \(({\textbf{X}}, {\textbf{Y}})\) denotes the paired dataset, \(\beta\) the model parameters, and \(\lambda\) the regularization hyperparameter

From: A comparison of machine learning methods for ozone pollution prediction

| Model | Formulation | Kernel |
| --- | --- | --- |
| Linear Regression | \({\textbf{Y}} = {\textbf{X}} \beta\) | None |
| Linear Regression with \(l_2\) regularizer | \(\min _{\beta } \Vert {\textbf{Y}} - {\textbf{X}} \beta \Vert _2^2 + \lambda \Vert \beta \Vert _2^2\) | None |
| Lasso | \(\min _{\beta } \Vert {\textbf{Y}} - {\textbf{X}} \beta \Vert _2^2 + \lambda \Vert \beta \Vert _1\) | None |
| Partial Least Squares Regression | Regression of decomposed \({\textbf{Y}}\) and \({\textbf{X}}\) | None |
| GPR (exponential kernel) | \({\textbf{Y}} \sim {\mathcal {N}} ({\textbf{X}} \beta , \Sigma )\) | Eq. 10 |
| GPR (dot-product kernel) | \({\textbf{Y}} \sim {\mathcal {N}} ({\textbf{X}} \beta , \Sigma )\) | Eq. 13 |
| GPR (Matérn kernel) | \({\textbf{Y}} \sim {\mathcal {N}} ({\textbf{X}} \beta , \Sigma )\) | Eq. 12 |
| SVR (linear kernel) | Eq. 3 | Eq. 14 |
| SVR (polynomial kernel) | Eq. 3 | Eq. 15 |
| SVR (radial basis kernel) | Eq. 3 | Eq. 11 |
| SVR (sigmoid kernel) | Eq. 3 | Eq. 16 |
| MLP_1 | Layer shape [10, 5, 1] | None |
| MLP_2 | Layer shape [10, 5, 2, 1] | None |
| RF | Depth 7; criterion: squared error | None |
| Bagging | 10 decision trees as base learners | None |
| GBoost | Criterion: Friedman MSE | None |
| AdaBoost | 50 decision trees as base learners | None |
| HistGBoost | Criterion: squared error | None |
| LightGBM | 31 leaves; criterion: squared error | None |
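
The paper does not provide code for these configurations. The following is a minimal sketch, assuming scikit-learn and LightGBM implementations (the paper does not state which library was used), of how the rows of Table 2 might be instantiated. Values not listed in the table, such as the regularization strength \(\lambda\) and the number of PLS components, are assumptions left at library defaults; the exponential GPR kernel is expressed as a Matérn kernel with \(\nu = 0.5\).

```python
# Sketch of the Table 2 model configurations, assuming scikit-learn / LightGBM.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.cross_decomposition import PLSRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, Matern
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import (AdaBoostRegressor, BaggingRegressor,
                              GradientBoostingRegressor,
                              HistGradientBoostingRegressor,
                              RandomForestRegressor)
from lightgbm import LGBMRegressor

models = {
    "Linear Regression": LinearRegression(),
    "Ridge (l2 regularizer)": Ridge(alpha=1.0),       # alpha plays the role of lambda (value assumed)
    "Lasso (l1 regularizer)": Lasso(alpha=1.0),
    "PLS Regression": PLSRegression(n_components=2),  # number of components assumed
    # Exponential kernel corresponds to Matern with nu=0.5; the Matern row uses the default nu=1.5.
    "GPR (exponential kernel)": GaussianProcessRegressor(kernel=Matern(nu=0.5)),
    "GPR (dot-product kernel)": GaussianProcessRegressor(kernel=DotProduct()),
    "GPR (Matern kernel)": GaussianProcessRegressor(kernel=Matern(nu=1.5)),
    "SVR (linear kernel)": SVR(kernel="linear"),
    "SVR (polynomial kernel)": SVR(kernel="poly"),
    "SVR (radial basis kernel)": SVR(kernel="rbf"),
    "SVR (sigmoid kernel)": SVR(kernel="sigmoid"),
    # The final layer of size 1 is the implicit output layer of MLPRegressor.
    "MLP_1": MLPRegressor(hidden_layer_sizes=(10, 5), max_iter=2000),
    "MLP_2": MLPRegressor(hidden_layer_sizes=(10, 5, 2), max_iter=2000),
    "RF": RandomForestRegressor(max_depth=7, criterion="squared_error"),
    # BaggingRegressor and AdaBoostRegressor use decision-tree base learners by default.
    "Bagging": BaggingRegressor(n_estimators=10),
    "GBoost": GradientBoostingRegressor(criterion="friedman_mse"),
    "AdaBoost": AdaBoostRegressor(n_estimators=50),
    "HistGBoost": HistGradientBoostingRegressor(),     # squared-error loss is the default
    "LightGBM": LGBMRegressor(num_leaves=31),
}

# Illustrative fit on synthetic data standing in for the paired dataset (X, Y).
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(200, 8)), rng.normal(size=200)
for name, model in models.items():
    model.fit(X, Y)
```

Note that the regularization strengths, the PLS component count, and the MLP training settings are not specified in Table 2, so they would need to match the authors' choices to reproduce the reported results.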