Table 6 Comparison of overall precision, recall, F1-score, and accuracy for offensive language identification (OLI)

From: Social media text analytics of Malayalam–English code-mixed using deep learning

| Model | Word embedding method | Precision | Recall | F1-score | Accuracy |
|---|---|---|---|---|---|
| CNN | Word2Vec | 0.9901 | 0.9895 | **0.9896** | 0.9895 |
| | FastText | 0.9834 | 0.9810 | 0.9816 | 0.9810 |
| LSTM | Word2Vec | 0.9930 | 0.9930 | **0.9929** | 0.9930 |
| | FastText | 0.9905 | 0.9900 | 0.9901 | 0.9900 |
| GRU | Word2Vec | 0.9965 | 0.9965 | **0.9964** | 0.9965 |
| | FastText | 0.9898 | 0.9890 | 0.9891 | 0.9890 |
| BiLSTM | Word2Vec | 0.9937 | 0.9935 | 0.9935 | 0.9935 |
| | FastText | 0.9965 | 0.9965 | **0.9964** | 0.9965 |
| BiGRU | Word2Vec | 0.9965 | 0.9965 | 0.9964 | 0.9965 |
| | FastText | 0.9970 | 0.9970 | **0.9969** | 0.9970 |
| BiLSTM + CNN | Word2Vec | 0.9960 | 0.9960 | 0.9959 | 0.9960 |
| | FastText | 0.9969 | 0.9970 | **0.9969** | 0.9970 |
| BiGRU + CNN | Word2Vec | 0.9960 | 0.9960 | **0.9959** | 0.9960 |
| | FastText | 0.9852 | 0.9835 | 0.9839 | 0.9835 |
| LSTM + CNN | Word2Vec | 0.9950 | 0.9950 | 0.9949 | 0.9950 |
| | FastText | 0.9964 | 0.9965 | **0.9964** | 0.9965 |
| GRU + CNN | Word2Vec | 0.99701 | 0.9970 | **0.9969** | 0.9970 |
| | FastText | 0.9955 | 0.9955 | 0.9954 | 0.9955 |
| Transformer based classification model | XLM-R | 0.9904 | 0.9900 | **0.9901** | 0.9900 |

Note: The numbers in bold represent the highest F1-score obtained for each model.
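For readers who want to reproduce this style of comparison, the sketch below (Python, Keras and scikit-learn) shows how one of the table's stronger combinations, a BiGRU classifier over FastText-style embeddings, could be assembled and scored on the four reported metrics. It is a minimal illustration under stated assumptions, not the authors' code: the toy data, layer sizes, binary label setup, and weighted metric averaging are all assumptions.

```python
# Minimal sketch of a BiGRU text classifier evaluated with the table's four
# metrics. Hyperparameters, toy data, and weighted averaging are assumptions.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from tensorflow.keras import layers, models

VOCAB, EMBED_DIM, MAX_LEN = 5000, 300, 50       # assumed sizes

# Toy integer-encoded token sequences with binary OLI labels (1 = offensive)
rng = np.random.default_rng(0)
X = rng.integers(1, VOCAB, size=(64, MAX_LEN))
y = rng.integers(0, 2, size=64)

model = models.Sequential([
    # In the paper's setting this layer would be initialized with pretrained
    # FastText (or Word2Vec) vectors; random initialization is a stand-in.
    layers.Embedding(VOCAB, EMBED_DIM),
    layers.Bidirectional(layers.GRU(64)),       # BiGRU encoder
    layers.Dense(1, activation="sigmoid"),      # offensive / not-offensive
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=1, verbose=0)

# The four metrics reported in the table
y_pred = (model.predict(X, verbose=0) > 0.5).astype(int).ravel()
p, r, f1, _ = precision_recall_fscore_support(
    y, y_pred, average="weighted", zero_division=0)
print(f"P={p:.4f}  R={r:.4f}  F1={f1:.4f}  Acc={accuracy_score(y, y_pred):.4f}")
```

FastText's subword-level vectors tolerate the spelling variation typical of Romanized Malayalam–English code-mixing, which is consistent with FastText edging out Word2Vec for several of the bidirectional and hybrid models in the table.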