Source: From big data to smart data: a sample gradient descent approach for machine learning
Author and Year | Technique Used | Application | Major contributions | Findings of the study |
---|---|---|---|---|
[20] | Mini-batch Gradient Descent | Image recognition | Introduced mini-batch GD for CNNs (sketch below) | Improved convergence rates and training efficiency
[21] | Adam Optimizer | Deep learning | Proposed an optimizer with adaptive per-parameter learning rates (sketch below) | Faster convergence in deep learning
[22] | Nesterov Accelerated Gradient Descent | Optimization algorithms | Introduced Nesterov accelerated gradient descent for optimization (sketch below) | Enhanced convergence compared to traditional gradient descent
[23] | Conjugate Gradient Descent | Numerical optimization | Investigated CG descent for non-convex optimization | Faster convergence in specific types of optimization problems
[24] | Hessian-based Methods | Deep learning | Explored second-order methods using Hessian matrices (sketch below) | Improved optimization for deep-learning cost functions
[25] | Convolutional Neural Networks (CNNs) | Image recognition | Studied CNNs for image recognition and object detection | Effective hierarchical feature learning in image processing |
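
To make the surveyed update rules concrete, the sketches below restate a few of them in minimal NumPy. All function names, defaults, and losses are illustrative assumptions, not code from the cited papers. First, the mini-batch scheme of [20]: each step averages the gradient over a small random batch instead of the full dataset.

```python
import numpy as np

def minibatch_gd(X, y, lr=0.01, batch_size=32, epochs=10, seed=0):
    """Mini-batch gradient descent on a linear least-squares model.

    The quadratic loss is a stand-in for the CNN objective of [20];
    only the batching and update pattern matter here.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n)                      # reshuffle each epoch
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            # Gradient of (1 / 2|b|) * ||X_b w - y_b||^2
            grad = X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad                            # one step per batch
    return w
```

The batch size trades gradient noise against per-step cost, which is the source of the convergence and efficiency gains reported for [20].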
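
The adaptive method of [21] (Adam) maintains exponential moving averages of the gradient and its elementwise square, then scales each coordinate's step by them. A one-step sketch follows; the hyperparameter defaults match the original paper, while the `(m, v, t)` state layout is an assumption of this sketch.

```python
import numpy as np

def adam_step(w, grad, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; `state` is (m, v, t), initialized to
    (zeros_like(w), zeros_like(w), 0)."""
    m, v, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * grad            # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                  # bias corrections
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)   # per-parameter step size
    return w, (m, v, t)
```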
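
For [22], the defining detail is where the gradient is evaluated: at a look-ahead point rather than the current iterate, which is what yields the accelerated convergence noted above. Here `grad_fn` is an assumed callable returning the gradient of the objective.

```python
import numpy as np

def nesterov_step(w, v, grad_fn, lr=0.01, momentum=0.9):
    """One Nesterov accelerated gradient step: the gradient is taken
    at the look-ahead point w + momentum * v, not at w itself."""
    v = momentum * v - lr * grad_fn(w + momentum * v)   # velocity update
    w = w + v
    return w, v
```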
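
Finally, the second-order methods of [24] use curvature information from the Hessian; the simplest instance is a damped Newton step (the damping term is our addition, to keep the linear solve well-posed).

```python
import numpy as np

def newton_step(w, grad, hess, damping=1e-4):
    """One damped Newton step using the full Hessian."""
    # Solve (H + damping * I) d = grad for the step direction d;
    # damping keeps the solve well-posed when H is singular or
    # indefinite, as is common for deep-learning losses.
    d = np.linalg.solve(hess + damping * np.eye(w.size), grad)
    return w - d
```

Practical deep-learning variants avoid forming the Hessian explicitly and rely on Hessian-vector products; the explicit solve here is purely for exposition.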