Table 5 Results of argument analysis using deep learning models with attention mechanism (Word Embedding from Scratch)

From: Argument annotation and analysis using deep learning with attention mechanism in Bahasa Indonesia

No   Model name     Accuracy (%)    Recall (%)      Precision (%)   F1 macro (%)    ROC-AUC
1    CNN            81.44 ± 2.54    77.33 ± 4.03    80.41 ± 2.73    78.18 ± 3.59    88.81 ± 0.02
2    CNN + Att      72.59 ± 6.41    65.74 ± 6.39    71.35 ± 7.65    65.89 ± 7.57    79.63 ± 0.05
3    LSTM           66.76 ± 1.88    51.14 ± 2.66    45.36 ± 19.95   42.47 ± 4.89    61.32 ± 0.03
4    LSTM + Att     72.71 ± 5.50    61.58 ± 9.63    60.05 ± 22.23   58.29 ± 15.21   79.03 ± 0.09
5    GRU            65.89 ± 1.87    54.21 ± 3.31    54.83 ± 11.50   51.35 ± 6.45    59.38 ± 0.04
6    GRU + Att      77.17 ± 6.30    71.37 ± 12.01   79.84 ± 3.43    69.34 ± 14.66   83.36 ± 0.10
7    BiLSTM         70.06 ± 5.66    62.56 ± 7.69    67.75 ± 6.46    61.14 ± 9.82    72.45 ± 0.06
8    BiLSTM + Att   75.32 ± 4.46    68.23 ± 8.05    74.10 ± 5.11    68.14 ± 8.94    79.65 ± 0.04
9    BiGRU          65.31 ± 2.56    57.45 ± 6.27    59.27 ± 6.03    55.61 ± 6.88    64.29 ± 0.05
10   BiGRU + Att    75.81 ± 5.78    69.52 ± 10.63   79.07 ± 3.94    67.55 ± 13.95   82.14 ± 0.10
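The table reports each metric as mean ± standard deviation, presumably aggregated over repeated runs or cross-validation folds. The sketch below is a minimal, hypothetical illustration of how such per-fold metrics (accuracy, macro recall/precision/F1, and multi-class ROC-AUC) could be aggregated with scikit-learn; the fold data, function names, and aggregation setup are assumptions for illustration, not the paper's actual implementation.

import numpy as np
from sklearn.metrics import (accuracy_score, recall_score, precision_score,
                             f1_score, roc_auc_score)

def summarize_folds(y_true_folds, y_pred_folds, y_score_folds):
    """Aggregate per-fold classification metrics into mean and standard deviation.

    Hypothetical helper: each argument is a list with one array per fold
    (true labels, predicted labels, and predicted class probabilities).
    """
    metrics = {"accuracy": [], "recall": [], "precision": [], "f1_macro": [], "roc_auc": []}
    for y_true, y_pred, y_score in zip(y_true_folds, y_pred_folds, y_score_folds):
        metrics["accuracy"].append(accuracy_score(y_true, y_pred))
        metrics["recall"].append(recall_score(y_true, y_pred, average="macro"))
        metrics["precision"].append(precision_score(y_true, y_pred, average="macro"))
        metrics["f1_macro"].append(f1_score(y_true, y_pred, average="macro"))
        # Multi-class ROC-AUC (one-vs-rest) from predicted class probabilities.
        metrics["roc_auc"].append(roc_auc_score(y_true, y_score, multi_class="ovr"))
    return {name: (np.mean(vals), np.std(vals)) for name, vals in metrics.items()}

if __name__ == "__main__":
    # Dummy 5-fold example with 3 classes; probabilities per row sum to 1.
    rng = np.random.default_rng(0)
    y_true_folds, y_pred_folds, y_score_folds = [], [], []
    for _ in range(5):
        y_true = rng.integers(0, 3, size=100)
        scores = rng.dirichlet(np.ones(3), size=100)
        y_true_folds.append(y_true)
        y_pred_folds.append(scores.argmax(axis=1))
        y_score_folds.append(scores)
    for name, (mean, std) in summarize_folds(y_true_folds, y_pred_folds,
                                             y_score_folds).items():
        print(f"{name}: {mean * 100:.2f} ± {std * 100:.2f}")

Whether ROC-AUC is reported on a 0-1 scale or as a percentage differs between the mean and the ± term in the table above; the sketch simply prints both on the same scale.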