Table 1 Summary of research on hand classification

From: Comprehensive study of driver behavior monitoring systems using computer vision and machine learning techniques

| Refs. | Summary | Methodology | Relevance |
| --- | --- | --- | --- |
| [33] | Enhances action recognition using dense trajectories, improving understanding of physical actions in videos. | Dense trajectories | Improves action recognition models for accurate action detection, relevant to driver behavior analysis in autonomous vehicles. |
| [34] | Combines temporal and spatial convolution in a new CNN model to learn spatiotemporal features from videos. | Spatiotemporal Multiplier Networks | Spatiotemporal Multiplier Networks (STMNs) extract important spatiotemporal features from in-cabin video, supporting analysis within autonomous vehicle cabins. |
| [35] | Uses EfficientNet, a highly efficient ConvNet family, to achieve state-of-the-art accuracy through systematic model scaling. | EfficientNet | EfficientNet's superior accuracy and efficiency are relevant for developing robust vision systems within autonomous vehicle cabins. |
| [36] | Utilizes MobileNets to design efficient, lightweight deep neural networks suited for mobile and embedded vision applications. | MobileNets | MobileNets' efficient, lightweight architecture suits real-time vision systems in autonomous vehicle cabins while minimizing computational resources. |
| [37] | Combines GoogLeNet and LSTM models to classify self-efficacy levels through human body gesture and movement recognition, achieving high accuracy. | CNN (GoogLeNet) and LSTM | Provides an effective approach to monitoring and analyzing driver behaviors, enhancing safety and efficiency within autonomous vehicles. |
| [38] | Uses a pre-trained Keras neural network to classify hand presence in a controlled hand-washing dataset, achieving perfect accuracy. | Neural network using Keras | Demonstrates an effective approach to hand-presence classification, potentially enhancing safety and efficiency by monitoring driver actions in autonomous vehicles. |
| [39] | Introduces a novel hard attention network for Driver Action Recognition (DAR), effectively recognizing driver behaviors in real-world conditions while reducing computational complexity. | Bidirectional LSTM (Bi-LSTM) | Investigates deep learning for driver behavior monitoring and action recognition, aligning with the goal of in-cabin analysis in autonomous vehicles. |
| [40] | Uses a multi-camera framework for hand classification in driver monitoring systems, potentially enhancing traffic safety and reducing distracted driving. | ResNet CNN | Presents a multi-camera framework for hand classification in driver monitoring systems, aligning with vision-based machine learning analysis in autonomous vehicle cabins. |
| [41] | Presents a CNN-based system for abnormal driving behavior recognition, emphasizing the importance of monitoring and preventing potential accidents caused by distractions. | CNN | Detects abnormal driving behaviors through physiological characteristic classification using deep learning, contributing to the understanding of vision systems for driver behavior analysis in autonomous vehicle cabins. |
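
To make the classical video-feature approach in [33] concrete, the sketch below computes the dense optical-flow field that dense-trajectory descriptors are built on. It is a minimal illustration only (the full method also tracks densely sampled points and aggregates HOG/HOF/MBH descriptors along trajectories); the OpenCV Farneback parameters and the video filename are illustrative assumptions, not details from the cited work.

```python
# Minimal sketch: dense optical flow between consecutive frames, the core
# motion signal behind dense-trajectory action features [33].
# The input clip name is a hypothetical placeholder.
import cv2
import numpy as np

cap = cv2.VideoCapture("in_cabin_clip.mp4")   # hypothetical in-cabin video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense per-pixel flow field between consecutive frames (Farneback).
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    print("mean motion magnitude:", float(np.mean(magnitude)))
    prev_gray = gray

cap.release()
```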
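
For the image-classification entries ([35], [36], [38], [40]), a common implementation pattern is transfer learning from a pre-trained ImageNet backbone. The sketch below uses a frozen MobileNetV2 in Keras with a small classification head for a hypothetical hands-on-wheel vs. hands-off-wheel task; the dataset paths, class count, and hyperparameters are assumptions for illustration, not details taken from the cited studies.

```python
# Minimal sketch: transfer learning for hand classification with a
# pre-trained MobileNetV2 backbone (dataset layout and hyperparameters
# are illustrative assumptions, not taken from the cited studies).
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)          # input resolution expected by MobileNetV2
NUM_CLASSES = 2                # e.g. "hands on wheel" vs. "hands off wheel"

# Frozen ImageNet backbone extracts generic visual features.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
backbone.trainable = False

# Small classification head trained on in-cabin images.
model = models.Sequential([
    layers.Input(shape=IMG_SIZE + (3,)),
    layers.Rescaling(1.0 / 127.5, offset=-1.0),   # MobileNetV2 preprocessing
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory layout: one sub-folder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/hand_classification/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/hand_classification/val", image_size=IMG_SIZE, batch_size=32)

model.fit(train_ds, validation_data=val_ds, epochs=5)
```

Swapping the backbone for EfficientNet (e.g. `tf.keras.applications.EfficientNetB0`) or a ResNet follows the same pattern, which is why these lightweight families are attractive for in-cabin deployment.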
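
For the temporal models ([34], [37], [39]), one minimal realization of the CNN-plus-recurrent pattern is a per-frame CNN feature extractor followed by a bidirectional LSTM. The sketch below uses MobileNetV2 as a stand-in backbone (reference [37] uses GoogLeNet); the clip length, hidden size, and number of action classes are illustrative assumptions.

```python
# Minimal sketch: per-frame CNN features + bidirectional LSTM over video
# clips, illustrating the CNN+LSTM pattern behind [37] and [39].
# Sequence length, hidden size, and class count are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 16                   # frames per clip
IMG_SIZE = (224, 224)
NUM_ACTIONS = 10               # e.g. phone use, drinking, normal driving, ...

# Per-frame CNN backbone (frozen ImageNet weights) shared across time steps.
frame_cnn = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False,
    pooling="avg", weights="imagenet")
frame_cnn.trainable = False

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,) + IMG_SIZE + (3,)),
    layers.TimeDistributed(frame_cnn),            # (batch, SEQ_LEN, 1280)
    layers.Bidirectional(layers.LSTM(128)),       # temporal aggregation
    layers.Dropout(0.3),
    layers.Dense(NUM_ACTIONS, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```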