Backpropagation
[[file:Studying.png|right|frame|Case study notes<ref>http://www.flaticon.com/</ref>]]
== Introduction ==
Backpropagation is a method to calculate the gradient of the loss function with respect to the weights in an artificial neural network. It is commonly used as part of algorithms that optimize the performance of the network by adjusting the weights, for example the gradient descent algorithm. It is also called backward propagation of errors.<ref>https://en.wikipedia.org/wiki/Backpropagation</ref>
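In symbols, backpropagation supplies the partial derivatives that a gradient-descent step then uses. With a generic loss <math>E</math>, a weight <math>w_{ij}</math> and a learning rate <math>\eta</math> (this notation is chosen here for illustration and is not taken from the case study), each weight is updated as

:<math>w_{ij} \leftarrow w_{ij} - \eta \frac{\partial E}{\partial w_{ij}}</math>

where backpropagation is the procedure that computes <math>\frac{\partial E}{\partial w_{ij}}</math> efficiently for every weight in the network by applying the chain rule layer by layer, starting from the output.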
Backpropagation, short for "backward propagation of errors," is an algorithm for supervised learning of artificial neural networks using gradient descent. Given an artificial neural network and an error function, the method calculates the gradient of the error function with respect to the neural network's weights. It is a generalization of the delta rule for perceptrons to multilayer feedforward neural networks.<ref>https://brilliant.org/wiki/backpropagation/</ref>
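The sketch below illustrates how this could look for a feedforward network with a single hidden layer, using NumPy. The sigmoid activations, the squared-error loss, the layer sizes and names such as <code>train_step</code>, <code>W1</code> and <code>W2</code> are assumptions made for illustration only; they are not taken from the case study or from any particular library.

<syntaxhighlight lang="python">
# A minimal sketch of backpropagation for one hidden layer, using NumPy.
# Network shape, sigmoid activations and squared-error loss are illustrative
# assumptions, not taken from the case study.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, y, W1, W2, lr=0.1):
    # Forward pass: input -> hidden -> output.
    h = sigmoid(W1 @ x)          # hidden activations
    y_hat = sigmoid(W2 @ h)      # network output
    # Loss: E = 0.5 * ||y_hat - y||^2.
    # Backward pass: propagate the error from the output layer back.
    delta_out = (y_hat - y) * y_hat * (1 - y_hat)      # dE/d(output pre-activation)
    delta_hidden = (W2.T @ delta_out) * h * (1 - h)    # dE/d(hidden pre-activation)
    # Gradients of E with respect to the weights.
    grad_W2 = np.outer(delta_out, h)
    grad_W1 = np.outer(delta_hidden, x)
    # Gradient-descent update using the backpropagated gradients.
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
    return 0.5 * np.sum((y_hat - y) ** 2)

# Example usage on a single training pair.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(3, 2))   # hidden-layer weights
W2 = rng.normal(scale=0.5, size=(1, 3))   # output-layer weights
x = np.array([0.5, -0.2])
y = np.array([1.0])
for _ in range(100):
    loss = train_step(x, y, W1, W2)
print(loss)   # loss shrinks as the weights are adjusted
</syntaxhighlight>

The two <code>delta</code> terms are the error signals passed backwards from the output layer to the hidden layer, which is where the name "backward propagation of errors" comes from.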
== References ==
<references/>
[[Category:2018 case study]]