
Table 7 Proposed model compared with other state-of-the-art models

From: Deep learning for emotion analysis in Arabic tweets

| Model | Preprocessing | Features | Classification algorithm | Validation accuracy | Test accuracy | Micro F1 | Macro F1 |
|---|---|---|---|---|---|---|---|
| Proposed model | Normalization + manual emoji lexicon + ARLSTEM | AraVec [18] | Bidirectional LSTM | 0.575 | 0.498 | 0.615 | 0.440 |
| EMA | Normalization + manual emoji lexicon + ARLSTEM | AraVec | SVC | 0.488 | 0.489 | 0.618 | 0.461 |
| TW-Star | Emo + Stem + stop | TF-IDF | SVM | NA | 0.465 | 0.597 | 0.446 |
| UNCC | Tokenization + white-space removal | AraVec + Affective Tweets features | Fully connected neural network | NA | 0.446 | 0.572 | 0.447 |
| SVM-Unigrams | NA | Unigrams | SVM | NA | 0.380 | 0.516 | 0.384 |
| Amrita | NA | Doc2Vec | RF | NA | 0.254 | 0.379 | 0.250 |
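
The proposed model row combines pretrained AraVec word embeddings with a bidirectional LSTM classifier. A minimal sketch of that architecture is given below, assuming Keras/TensorFlow and the 11-label multi-label emotion setup of SemEval-2018 Task 1 (E-c); the vocabulary size, sequence length, hidden size, and dropout rate are illustrative assumptions, not values reported in the paper.

```python
# Sketch of an AraVec-embedding + bidirectional-LSTM classifier.
# All hyperparameters below are assumptions for illustration only.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 50_000   # assumed vocabulary size
EMBED_DIM = 300       # AraVec is distributed in 100- and 300-dimensional variants
MAX_LEN = 50          # assumed maximum tweet length in tokens
NUM_LABELS = 11       # emotion labels in the SemEval E-c task

# Placeholder matrix; in practice each row would hold the AraVec vector of one token.
embedding_matrix = np.random.normal(size=(VOCAB_SIZE, EMBED_DIM)).astype("float32")

model = keras.Sequential([
    layers.Input(shape=(MAX_LEN,), dtype="int32"),
    layers.Embedding(
        VOCAB_SIZE,
        EMBED_DIM,
        embeddings_initializer=keras.initializers.Constant(embedding_matrix),
        trainable=False,                     # keep the pretrained vectors fixed
    ),
    layers.Bidirectional(layers.LSTM(128)),  # assumed hidden size
    layers.Dropout(0.5),
    layers.Dense(NUM_LABELS, activation="sigmoid"),  # one sigmoid per emotion label
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["binary_accuracy"])
model.summary()
```

For reading the last two columns: Micro F1 pools true/false positives and negatives across all emotion labels before computing F1, while Macro F1 averages the per-label F1 scores, so macro values drop when infrequent emotions are predicted poorly.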