Table 1 Comparison of XLNet with BERT and RoBERTa [28]

From: AraXLNet: pre-trained language model for sentiment analysis of Arabic

| Model   | Method                                                                                      | Accuracy (%) | F1-score (SQuAD1.1)ᵃ (%) |
|---------|---------------------------------------------------------------------------------------------|--------------|---------------------------|
| BERT    | Bidirectional transformer with masked language modeling (MLM) and next sentence prediction (NSP) | 72.0         | 90.9                      |
| RoBERTa | BERT without NSP                                                                             | 83.2         | 94.6                      |
| XLNet   | Bidirectional transformer with permutation-based modeling                                    | 85.4         | 95.1                      |

ᵃSQuAD1.1 is a large-scale dataset containing questions and answers
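The two pretraining objectives in the table differ mainly in how each position sees its context: BERT's MLM predicts artificially masked tokens from the full bidirectional context, while XLNet predicts tokens under randomly sampled factorization orders. Below is a minimal NumPy sketch, not taken from the paper, contrasting the two; the example tokens, the 15% masking rate, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = ["the", "movie", "was", "great"]
n = len(tokens)

# MLM (BERT): replace roughly 15% of positions with [MASK] and predict
# them from the full bidirectional context of the remaining tokens.
masked = rng.random(n) < 0.15

# Permutation LM (XLNet): sample a factorization order and predict each
# token from only the tokens that precede it in that order. Averaged over
# many sampled orders, every position is conditioned on bidirectional
# context, without introducing an artificial [MASK] symbol.
order = rng.permutation(n)
can_attend = np.zeros((n, n), dtype=bool)
for step, pos in enumerate(order):
    # Position `pos` may attend only to positions earlier in the order.
    can_attend[pos, order[:step]] = True

print("masked positions:", np.flatnonzero(masked))
print("factorization order:", order)
print("attention mask:\n", can_attend.astype(int))
```

Because the permutation objective never inserts [MASK] tokens, pretraining and fine-tuning inputs match, which is one reason the table shows XLNet ahead of BERT on both metrics.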