Table 9 Comparative results for sentiment analysis (SA): overall precision, recall, F1-score, and accuracy

From: Social media text analytics of Malayalam–English code-mixed using deep learning

| Model | Word embedding method | Precision | Recall | F1-score | Accuracy |
|---|---|---|---|---|---|
| CNN | Word2Vec | 0.7518 | 0.7455 | **0.7477** | 0.7455 |
| | FastText | 0.7458 | 0.7329 | 0.7374 | 0.7329 |
| LSTM | Word2Vec | 0.7213 | 0.6995 | 0.7057 | 0.6995 |
| | FastText | 0.7392 | 0.7381 | **0.7372** | 0.7381 |
| GRU | Word2Vec | 0.7641 | 0.7603 | 0.7615 | 0.7603 |
| | FastText | 0.7607 | 0.7633 | **0.7617** | 0.7633 |
| BiLSTM | Word2Vec | 0.7247 | 0.7225 | 0.7226 | 0.7225 |
| | FastText | 0.7374 | 0.7284 | **0.7297** | 0.7284 |
| BiGRU | Word2Vec | 0.7020 | 0.6965 | 0.6961 | 0.6965 |
| | FastText | 0.7395 | 0.7351 | **0.7370** | 0.7351 |
| BiLSTM + CNN | Word2Vec | 0.7171 | 0.6810 | 0.6933 | 0.6810 |
| | FastText | 0.7380 | 0.7396 | **0.7356** | 0.7396 |
| BiGRU + CNN | Word2Vec | 0.7112 | 0.7106 | 0.7080 | 0.7106 |
| | FastText | 0.7416 | 0.7203 | **0.7276** | 0.7203 |
| LSTM + CNN | Word2Vec | 0.7135 | 0.7151 | 0.7124 | 0.7151 |
| | FastText | 0.7215 | 0.7292 | **0.7225** | 0.7292 |
| GRU + CNN | Word2Vec | 0.7207 | 0.7121 | 0.7158 | 0.7121 |
| | FastText | 0.7514 | 0.7255 | **0.7336** | 0.7255 |
| Transformer based classification model | XLM-R | 0.7312 | 0.7299 | **0.7302** | 0.7299 |

  1. The bold numbers represent the highest F1-score obtained for each model
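For reference, the sketch below (not the authors' code) shows how the four reported metrics are conventionally computed with scikit-learn; the labels used are hypothetical. Note that under weighted averaging, recall reduces mathematically to plain accuracy, which is consistent with the identical Recall and Accuracy columns in the table.

```python
# Minimal sketch of computing the four table metrics with scikit-learn.
# Assumption: the paper's metrics use weighted averaging across classes;
# with average='weighted', recall equals accuracy by construction.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

# Hypothetical gold labels and predictions for a 3-class sentiment task
# (0 = negative, 1 = neutral, 2 = positive).
y_true = [0, 2, 1, 2, 0, 1, 2, 2, 0, 1]
y_pred = [0, 2, 1, 1, 0, 1, 2, 0, 0, 2]

precision = precision_score(y_true, y_pred, average="weighted")
recall = recall_score(y_true, y_pred, average="weighted")
f1 = f1_score(y_true, y_pred, average="weighted")
accuracy = accuracy_score(y_true, y_pred)

print(f"Precision: {precision:.4f}")
print(f"Recall:    {recall:.4f}")  # equals accuracy under weighted averaging
print(f"F1-score:  {f1:.4f}")
print(f"Accuracy:  {accuracy:.4f}")
```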