Stochastic gradient descent

Stochastic gradient descent (SGD) is an iterative optimization algorithm used to find a minimum of a function (typically a local minimum when the function is not convex). At each step it updates the parameters of a model in the direction that reduces the error, or loss, of the model on the training data.
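
Written out, a single update takes a step of size η (the learning rate) against the gradient of the loss L with respect to the parameters θ. The symbols here are conventional choices rather than notation defined on this page:

```latex
\theta_{t+1} = \theta_t - \eta \, \nabla_\theta L(\theta_t)
```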

In SGD, the parameters of the model are updated using the gradient of the loss function with respect to the parameters. The gradient is a vector of partial derivatives that indicates the rate of change of the loss function with respect to each parameter. By moving in the opposite direction of the gradient, the algorithm steps toward a minimum of the loss function and improves the accuracy of the model.
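
As a concrete illustration, the sketch below applies one such update to a small linear model with a squared-error loss. The function name, learning rate, and toy data are made up for this example and are not part of any particular library:

```python
import numpy as np

def sgd_step(w, b, x, y, lr=0.01):
    """One SGD step for a linear model y_hat = w @ x + b with loss 0.5*(y_hat - y)**2."""
    y_hat = w @ x + b        # model prediction on a single training example
    error = y_hat - y        # derivative of the loss with respect to y_hat
    grad_w = error * x       # partial derivatives of the loss w.r.t. each weight
    grad_b = error           # partial derivative of the loss w.r.t. the bias
    w = w - lr * grad_w      # move against the gradient
    b = b - lr * grad_b
    return w, b

# Example usage on a single training point
w, b = np.zeros(3), 0.0
x, y = np.array([1.0, 2.0, 3.0]), 4.0
w, b = sgd_step(w, b, x, y, lr=0.1)
```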

One of the key features of SGD is that it computes the gradient at each iteration from a single training example or a small subset of the training data, called a mini-batch, rather than the full dataset. This makes each iteration much cheaper than in batch methods that compute the gradient over the entire training set. Averaging the gradient over a mini-batch also dampens the influence of any individual data point, which helps with noise and outliers in the data.
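
A minimal sketch of this mini-batch loop, again for a linear least-squares model, is shown below; the function name, batch size, and data shapes are assumptions made for illustration:

```python
import numpy as np

def minibatch_sgd(X, y, w, lr=0.01, batch_size=32, epochs=10, rng=None):
    """Mini-batch SGD for linear least squares: loss = mean of 0.5*(X @ w - y)**2."""
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[0]
    for _ in range(epochs):
        order = rng.permutation(n)                 # shuffle the data once per epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]  # indices of this mini-batch
            Xb, yb = X[idx], y[idx]
            error = Xb @ w - yb                    # residuals on the mini-batch
            grad = Xb.T @ error / len(idx)         # gradient averaged over the batch
            w -= lr * grad                         # parameter update
    return w

# Example usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
true_w = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ true_w + 0.01 * rng.normal(size=1000)
w = minibatch_sgd(X, y, np.zeros(5), lr=0.1, batch_size=32, epochs=20)
```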

SGD is widely used in machine learning and deep learning, and is particularly well-suited for large-scale problems where the training dataset is too large to be processed all at once. It is also easy to implement and has a number of variants, such as mini-batch SGD, momentum SGD, and Nesterov momentum (Nesterov SGD), that can further improve its convergence properties; a sketch of one such variant follows below.
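
As an example of one variant, the sketch below shows a classical momentum update. The momentum coefficient beta and the exact scaling are conventions that differ slightly between textbooks and libraries, so treat this as one common formulation rather than a canonical definition:

```python
import numpy as np

def momentum_sgd_step(w, grad, velocity, lr=0.01, beta=0.9):
    """One momentum-SGD update.

    The velocity is an exponentially decayed sum of past gradients; reusing it
    smooths the update direction and often speeds up convergence compared to
    plain SGD, especially in narrow or curved regions of the loss surface.
    """
    velocity = beta * velocity - lr * grad  # accumulate past gradients with decay
    w = w + velocity                        # move along the accumulated direction
    return w, velocity

# Example usage with a made-up gradient
w = np.zeros(3)
velocity = np.zeros(3)
grad = np.array([0.5, -1.0, 2.0])
w, velocity = momentum_sgd_step(w, grad, velocity, lr=0.1, beta=0.9)
```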