Stochastic gradient descent


Stochastic gradient descent (SGD) is an optimization algorithm used to find the minimum of a function. It is an iterative algorithm that updates the parameters of a model in the direction that reduces the error or loss of the model on the training data.

In SGD, the parameters of the model are updated using the gradient of the loss function with respect to the parameters. The gradient is a vector of partial derivatives that indicates the rate of change of the loss function with respect to each parameter. By repeatedly moving the parameters a small step in the direction opposite the gradient, the algorithm moves toward a minimum of the loss function and improves the accuracy of the model.
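
As a rough sketch of that update rule (the function name, learning rate, and toy loss below are illustrative choices, not part of any particular library):

<syntaxhighlight lang="python">
import numpy as np

def sgd_step(params, grad, learning_rate=0.01):
    # Move the parameters a small step against the gradient of the loss.
    return params - learning_rate * grad

# Toy example: one step on the loss L(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w = np.array([0.0])
grad = 2 * (w - 3.0)
w = sgd_step(w, grad, learning_rate=0.1)  # w moves from 0.0 toward the minimum at 3.0
</syntaxhighlight>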

One of the key features of SGD is that it uses a small subset of the training data, called a mini-batch, to compute the gradient at each iteration. This makes each iteration much cheaper than in batch gradient descent, which computes the gradient over the entire training dataset. Averaging the gradient over a mini-batch also dampens the influence of any single noisy or outlying data point.
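
A minimal sketch of such a mini-batch training loop, assuming a simple linear model on synthetic data (the data, batch size, learning rate, and step count are illustrative assumptions):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data for a linear model: y = 2x + 1 plus noise.
X = rng.normal(size=1000)
y = 2.0 * X + 1.0 + 0.1 * rng.normal(size=1000)

w, b = 0.0, 0.0                      # model parameters
learning_rate, batch_size = 0.1, 32

for step in range(500):
    # Draw a small mini-batch instead of using the full dataset.
    idx = rng.integers(0, X.shape[0], size=batch_size)
    xb, yb = X[idx], y[idx]

    # Gradient of the mean squared error, averaged over the mini-batch.
    err = w * xb + b - yb
    grad_w = 2.0 * np.mean(err * xb)
    grad_b = 2.0 * np.mean(err)

    # Update the parameters in the direction that reduces the loss.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(w, b)  # close to the true values 2 and 1 after training
</syntaxhighlight>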

SGD is widely used in machine learning and deep learning, and is particularly well-suited for large-scale problems where the training dataset is too large to be processed all at once. It is also easy to implement and has a number of variants, such as mini-batch SGD, momentum SGD, and Nesterov SGD, that can further improve its convergence properties.
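
For example, a momentum variant keeps a running velocity built from past gradients and adds it to the update. A minimal sketch (the function name and hyperparameter values are illustrative, not a definitive implementation):

<syntaxhighlight lang="python">
import numpy as np

def momentum_sgd_step(params, grad, velocity, learning_rate=0.01, momentum=0.9):
    # The velocity accumulates a decaying sum of past gradients,
    # which smooths the updates and can speed up convergence.
    velocity = momentum * velocity - learning_rate * grad
    return params + velocity, velocity

# Usage: keep a velocity the same shape as the parameters, initialised to zero.
params = np.zeros(3)
velocity = np.zeros_like(params)
grad = np.array([1.0, -2.0, 0.5])     # gradient computed from some mini-batch
params, velocity = momentum_sgd_step(params, grad, velocity)
</syntaxhighlight>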