Table 6 Results of argument analysis using deep learning models with attention mechanism (word embedding trained from scratch)

From: Argument annotation and analysis using deep learning with attention mechanism in Bahasa Indonesia

| No | Model name   | Accuracy (%) | Recall (%)    | Precision (%) | F1 Macro (%)  | ROC-AUC      |
|----|--------------|--------------|---------------|---------------|---------------|--------------|
| 1  | CNN          | 74.35 ± 6.12 | 66.49 ± 11.45 | 74.21 ± 9.36  | 63.84 ± 14.81 | 80.86 ± 0.05 |
| 2  | CNN + Att    | 70.84 ± 3.33 | 63.02 ± 6.58  | 71.80 ± 5.80  | 61.17 ± 9.00  | 74.06 ± 0.07 |
| 3  | LSTM         | 65.79 ± 0.58 | 51.10 ± 2.04  | 43.80 ± 13.29 | 43.54 ± 5.49  | 56.43 ± 0.06 |
| 4  | LSTM + Att   | 69.98 ± 5.46 | 62.22 ± 8.53  | 69.14 ± 10.36 | 61.14 ± 9.51  | 72.40 ± 0.10 |
| 5  | GRU          | 67.15 ± 3.69 | 53.46 ± 6.54  | 51.25 ± 14.22 | 47.48 ± 10.26 | 57.54 ± 0.09 |
| 6  | GRU + Att    | 68.71 ± 4.73 | 58.89 ± 10.85 | 53.55 ± 17.62 | 52.39 ± 15.12 | 67.69 ± 0.10 |
| 7  | BiLSTM       | 67.64 ± 1.56 | 56.67 ± 3.85  | 64.01 ± 8.70  | 54.27 ± 6.46  | 65.72 ± 0.07 |
| 8  | BiLSTM + Att | 67.54 ± 4.21 | 57.50 ± 7.11  | 62.60 ± 16.99 | 52.86 ± 10.94 | 73.35 ± 0.09 |
| 9  | BiGRU        | 66.96 ± 1.01 | 54.92 ± 4.01  | 61.59 ± 4.09  | 51.28 ± 6.29  | 61.83 ± 0.05 |
| 10 | BiGRU + Att  | 69.10 ± 2.01 | 61.06 ± 9.05  | 74.14 ± 6.76  | 56.28 ± 11.17 | 71.47 ± 0.10 |
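
Each cell above reports a metric as mean ± standard deviation across repeated evaluation runs. As a minimal sketch (not the authors' evaluation script), the snippet below shows one way such aggregated scores could be computed with scikit-learn, assuming per-run predictions are available from a cross-validation or repeated-training loop; the helper names `evaluate_fold` and `summarize` are hypothetical, and the binary ROC-AUC call would need `multi_class="ovr"` for a multi-class setup.

```python
# Sketch: aggregate accuracy, macro recall/precision/F1, and ROC-AUC
# into mean ± std over several evaluation runs (assumed setup).
import numpy as np
from sklearn.metrics import (accuracy_score, recall_score, precision_score,
                             f1_score, roc_auc_score)

def evaluate_fold(y_true, y_pred, y_score):
    """Compute the five Table 6 metrics for a single run/fold."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred, average="macro"),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "f1_macro": f1_score(y_true, y_pred, average="macro"),
        # Binary case: y_score holds positive-class probabilities.
        "roc_auc": roc_auc_score(y_true, y_score),
    }

def summarize(per_fold_results):
    """Turn a list of per-run metric dicts into 'mean ± std' strings."""
    summary = {}
    for name in per_fold_results[0]:
        values = np.array([run[name] for run in per_fold_results])
        summary[name] = f"{values.mean():.2f} ± {values.std():.2f}"
    return summary
```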