Table 3 Comparison of FedFreezeBERT with other baseline approaches on the ArSarcasm-v2 test set

From: Federated Freeze BERT for text classification

**Baseline Methods and FedFreezeBERT with OrdinaryAggregator**

| Method | BERT Frozen? | \(F_1^{PN}\) |
|---|---|---|
| Central Training | No | 73.48 |
| FedAvg | No | 66.67 |
| FedProx | No | 66.36 |
| FedSplitBERT | No | **74.39** |
| D-FedFreezeBERT (FedAvg) | Yes | 49.39 |
| D-FedFreezeBERT (FedProx) | Yes | 47.5 |
| C-FedFreezeBERT | Yes | 65.1 |

**\(F_1^{PN}\) Using Advanced Aggregation Architectures**

| Method | BERT Frozen? | AverageAggregator | P-SUM | H-SUM |
|---|---|---|---|---|
| Central Training | No | 73.00 | 74.63 | 73.93 |
| FedAvg | No | 63.84 | 71.71 | 68.81 |
| FedProx | No | 63.55 | 71.56 | 68.54 |
| FedSplitBERT | No | **74.08** | 74.57 | 74.88 |
| D-FedFreezeBERT (FedAvg) | Yes | 54.23 | 74.18 | 74.29 |
| D-FedFreezeBERT (FedProx) | Yes | 52.09 | 74.13 | 74.21 |
| C-FedFreezeBERT | Yes | 72.34 | **75.26** | **74.94** |

  1. Bold values indicate the best-performing approach for each fixed aggregation architecture (i.e., within each column)
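
The \(F_1^{PN}\) metric reported above is, presumably, the macro-averaged F1 restricted to the positive and negative sentiment classes, with the neutral class excluded from the average. Below is a minimal sketch of that computation using scikit-learn; the label strings and example predictions are illustrative assumptions, not taken from the paper.

```python
# Sketch of the F1^PN metric, assuming it is the macro F1 over the positive
# and negative classes only (neutral excluded). Label names are hypothetical.
from sklearn.metrics import f1_score

def f1_pn(y_true, y_pred, positive="POS", negative="NEG"):
    """Macro-averaged F1 computed over the positive and negative classes only."""
    return f1_score(y_true, y_pred, labels=[positive, negative], average="macro")

# Toy usage with made-up predictions (not the paper's data):
y_true = ["POS", "NEG", "NEU", "POS", "NEG"]
y_pred = ["POS", "NEG", "POS", "NEU", "NEG"]
print(f"F1^PN = {100 * f1_pn(y_true, y_pred):.2f}")
```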