
Table 4 Comparison between the proposed model and the counterpart models

From: Sentiment analysis of Indonesian datasets based on a hybrid deep-learning strategy

| Model | Accuracy* | Precision* | Recall* | F-score* | AUC* |
|---|---|---|---|---|---|
| BERT + LSTM-CNN-SVM | 75.50 | 73.73 | 77.10 | 74.63 | 76.33 |
| BERT and feature union | 82.60 | 82.23 | 82.83 | 82.50 | 82.67 |
| BERT + LSTM-CNN fusion model | 83.50 | 82.23 | 84.83 | 83.17 | 83.77 |
| BERT-large + BiGRU | 84.60 | 86.23 | 83.60 | 84.83 | 84.77 |
| RoBERTa-LSTM | 83.30 | 82.17 | 84.10 | 83.07 | 83.33 |
| RoBERTa-CNN | 83.00 | 82.77 | 83.37 | 82.93 | 83.23 |
| RoBERTa + BiGRU-ATT | 82.87 | 81.40 | 84.03 | 82.57 | 83.03 |
| SBERT-MLP | 80.83 | 82.53 | 78.60 | 80.30 | 81.17 |
| Proposed model | 85.13 | 85.41 | 84.94 | 85.17 | 85.14 |
*Metric values are expressed as percentages.
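
The metrics reported in Table 4 (accuracy, precision, recall, F-score, and AUC, all as percentages) are standard classification measures. As an illustration only, the minimal sketch below shows how such values could be computed with scikit-learn for a hypothetical set of binary sentiment labels and prediction scores; it is not the evaluation code used in the paper.

```python
# Minimal sketch (not from the paper): computing the Table 4 metric types
# for hypothetical binary sentiment predictions using scikit-learn.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, roc_auc_score)

# Hypothetical gold labels, hard predictions, and positive-class scores.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
y_score = np.array([0.9, 0.2, 0.8, 0.4, 0.1, 0.7, 0.6, 0.3])

# Report each metric as a percentage, matching the table's convention.
print(f"Accuracy : {100 * accuracy_score(y_true, y_pred):.2f}")
print(f"Precision: {100 * precision_score(y_true, y_pred):.2f}")
print(f"Recall   : {100 * recall_score(y_true, y_pred):.2f}")
print(f"F-score  : {100 * f1_score(y_true, y_pred):.2f}")
print(f"AUC      : {100 * roc_auc_score(y_true, y_score):.2f}")
```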