Table 4 Training parameters

From: Emotion analysis of Arabic tweets using deep learning approach

| Exper. ID | Embedding | Filter size | Filters nb. | Dropout | Batch size | Hidden dims | Remove stop words | Pre-processing | Acc. (%) |
|---|---|---|---|---|---|---|---|---|---|
| Ex1 | 300 | 3, 8 | 30 | 0.5, 0.8 | 64 | 50 | No | No | 47.59 |
| Ex2 | 300 | 3, 8 | 30 | 0.5, 0.8 | 128 | 50 | No | Yes | 54.55 |
| Ex3 | 300 | 3, 8 | 30 | 0.5, 0.8 | 128 | 100 | No | Yes | 52.05 |
| Ex4 | 300 | 3, 8 | 40 | 0.5, 0.8 | 128 | 100 | No | Yes | 53.48 |
| Ex5 | 300 | 3, 8 | 50 | 0.5, 0.8 | 128 | 100 | No | Yes | 51.69 |
| Ex6 | 300 | 3, 4, 5 | 40 | 0.5, 0.8 | 128 | 100 | No | Yes | 55.26 |
| Ex7 | 512 | 3, 4, 5 | 40 | 0.5, 0.8 | 128 | 100 | No | Yes | 56.51 |
| Ex8 | 300 | 3, 4, 5 | 40 | 0.5, 0.8 | 128 | 100 | Yes | Yes | 99.82 |

  1. Embedding is the first layer of the model; it requires the input data to be integer-encoded, so that each word is represented by a unique integer. Filter size is the size of the convolution filter used in the experiment. Filters nb. is an integer giving the dimensionality of the output space (the number of filters). Dropout applies a technique in which randomly selected neurons are ignored during training. Batch size is the number of training examples in one forward/backward pass. Hidden dims is the number of neurons in the hidden layer. Remove stop words indicates whether stop words were eliminated from the text in this experiment. Pre-processing indicates whether the preprocessing steps described in the "Data preprocessing" section, excluding stop-word elimination, were applied.
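The roles of the filter-size, filters-number, and dropout parameters above can be illustrated with a small NumPy sketch. This is a hypothetical illustration, not the paper's code: it slides filters of size 3 over a sequence of 300-d word embeddings (mirroring Ex6/Ex8's settings), producing one activation per position per filter, then applies inverted dropout at rate 0.5. The sequence length of 20 is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions mirroring the table: 300-d embeddings,
# filter size 3, 40 filters, dropout rate 0.5; seq_len is assumed.
seq_len, embed_dim = 20, 300
filter_size, n_filters = 3, 40
dropout_rate = 0.5

x = rng.normal(size=(seq_len, embed_dim))            # embedded tweet
w = rng.normal(size=(n_filters, filter_size, embed_dim))  # conv filters

# "Valid" 1-D convolution: slide each filter over the word sequence.
out_len = seq_len - filter_size + 1
conv = np.empty((out_len, n_filters))
for t in range(out_len):
    window = x[t:t + filter_size]                    # (filter_size, embed_dim)
    conv[t] = np.tensordot(w, window, axes=([1, 2], [0, 1]))

# Dropout: randomly zero activations during training, rescaling the
# survivors so the expected activation magnitude is unchanged.
mask = rng.random(conv.shape) >= dropout_rate
dropped = conv * mask / (1.0 - dropout_rate)

print(conv.shape)  # (18, 40): one output per window position per filter
```

Note how "Filters nb." sets the last dimension of the output (40 here), matching the footnote's description of it as the dimensionality of the output space.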