Table 3 Comparison of classification accuracy and Macro-F1 for ALSC

From: Aspect-level sentiment classification with fused local and global context

| Methods | Restaurant Accuracy | Restaurant Macro-F1 | Laptop Accuracy | Laptop Macro-F1 | Twitter Accuracy | Twitter Macro-F1 | MAMS Accuracy | MAMS Macro-F1 |
|---|---|---|---|---|---|---|---|---|
| TD-LSTM | 75.63 | – | 68.13 | – | 70.80 | 69.00 | – | – |
| IAN | 78.60 | – | 72.10 | – | – | – | – | – |
| MemNet | 78.16 | 65.83 | 70.33 | 64.09 | – | – | – | – |
| RAM | 80.23 | 70.80 | 74.49 | 71.35 | 69.36 | 67.30 | – | – |
| BERT | 82.86 | 74.87 | 77.12 | 72.55 | 74.42 | 72.67 | 81.96 | 81.28 |
| BERT-SPC | 84.46 | 76.98 | 78.99 | 75.03 | 73.55 | 72.14 | 82.82 | 81.90 |
| BERT-PT | 84.95 | 76.96 | 78.07 | 75.08 | – | – | – | – |
| BAT | 86.03 | 79.24 | 79.35 | 76.50 | – | – | – | – |
| T-GCN + BERT | 86.16 | 79.95 | 80.88 | 77.03 | 76.45 | 75.25 | 83.38 | 82.77 |
| dotGCN + BERT | 86.16 | 80.49 | 81.03 | 78.10 | 78.11 | 77.00 | – | – |
| PConvBERT (ours) | 86.96 | 80.87 | 81.66 | 78.33 | 76.73 | 75.82 | 84.36 | 83.95 |
| RoBERTa | 87.23 | 80.20 | 81.19 | 77.69 | 74.58 | 72.75 | 84.06 | 83.45 |
| PConvRoBERTa (ours) | 89.29 | 84.27 | 83.54 | 80.89 | 78.47 | 77.53 | 85.55 | 85.05 |

  1. All results for the baseline models, except BERT and RoBERTa, are taken from the original publications. “–” means not reported
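
For readers reproducing these metrics: accuracy is the fraction of correctly classified aspect instances, and Macro-F1 is the unweighted mean of the per-class F1 scores over the sentiment polarities (positive, neutral, negative). The following is a minimal sketch using scikit-learn with hypothetical gold and predicted label lists; it is not the paper's evaluation code.

```python
# Minimal sketch (assumption: 3-way polarity labels, scikit-learn available).
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical gold and predicted polarity labels for a handful of aspects.
gold = ["positive", "negative", "neutral", "positive", "negative"]
pred = ["positive", "neutral", "neutral", "positive", "negative"]

acc = accuracy_score(gold, pred)                  # fraction of correct predictions
macro_f1 = f1_score(gold, pred, average="macro")  # unweighted mean of per-class F1

print(f"Accuracy: {acc * 100:.2f}  Macro-F1: {macro_f1 * 100:.2f}")
```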