Backpropagation

<center>
<blockquote style="padding: 5px; background-color: #FFF8DC; border: solid thin gray;">
  [[File:Exclamation.png]] This is student work which has not yet been approved as correct by the instructor
</blockquote>
</center>
[[file:Studying.png|right|frame|Case study notes<ref>http://www.flaticon.com/</ref>]]


== Introduction ==
Backpropagation is a method to calculate the gradient of the loss function with respect to the weights in an artificial neural network. It is commonly used as a part of algorithms that optimize the performance of the network by adjusting the weights, for example in the gradient descent algorithm. It is also called backward propagation of errors.<ref>https://en.wikipedia.org/wiki/Backpropagation</ref>


Backpropagation, short for "backward propagation of errors," is an algorithm for supervised learning of artificial neural networks using gradient descent. Given an artificial neural network and an error function, the method calculates the gradient of the error function with respect to the neural network's weights. It is a generalization of the delta rule for perceptrons to multilayer feedforward neural networks.<ref>https://brilliant.org/wiki/backpropagation/</ref>
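The core loop can be illustrated with a minimal sketch in Python using NumPy: do a forward pass through a tiny two-layer network, apply the chain rule backwards to get the gradient of the loss with respect to every weight and bias, then take one gradient descent step. The network size, the single training example, the sigmoid activation, the squared-error loss and the learning rate below are illustrative assumptions, not part of any particular source or library.

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

# One made-up training example: input x and target y.
x = np.array([[0.5], [0.1]])   # shape (2, 1)
y = np.array([[1.0]])          # shape (1, 1)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros((3, 1))   # hidden layer: 2 inputs -> 3 units
W2, b2 = rng.normal(size=(1, 3)), np.zeros((1, 1))   # output layer: 3 units -> 1 output

# Forward pass: compute weighted inputs z and activations a, layer by layer.
z1 = W1 @ x + b1
a1 = sigmoid(z1)
z2 = W2 @ a1 + b2
a2 = sigmoid(z2)
loss = 0.5 * np.sum((a2 - y) ** 2)   # squared-error loss C

# Backward pass: propagate the error from the output layer back towards the input,
# using the chain rule to obtain dC/dW and dC/db for every layer.
delta2 = (a2 - y) * sigmoid_prime(z2)          # error at the output layer
dW2, db2 = delta2 @ a1.T, delta2
delta1 = (W2.T @ delta2) * sigmoid_prime(z1)   # error propagated back to the hidden layer
dW1, db1 = delta1 @ x.T, delta1

# Gradient descent: move each weight a small step against its gradient.
lr = 0.1
W2 -= lr * dW2; b2 -= lr * db2
W1 -= lr * dW1; b1 -= lr * db1
</syntaxhighlight>

Repeating these three steps (forward pass, backward pass, weight update) over many examples is the essence of training; the backward pass is the backpropagation step.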




== How does it work or a deeper look ==
* If you are discussing a THING YOU CAN TOUCH, you must explain how it works, and the parts it is made of. Google around for an "exploded technical diagram" of your thing, [http://cdiok.com/wp-content/uploads/2012/01/MRI-Technology.jpg maybe like this example of an MRI]. It is likely you will reference outside links. Please attribute your work.
* If you are discussing a PROCESS OR ABSTRACT CONCEPT (like [[fuzzy logic]]) you must deeply explain how it works.
[[file:backpropagation.png|right|frame|<ref>http://neuralnetworksanddeeplearning.com/chap5.html</ref>]]
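Written out in general form (following the notation of the external links below, where <math>C</math> is the cost function, <math>z^l</math> and <math>a^l</math> are the weighted inputs and activations of layer <math>l</math>, <math>\sigma</math> is the activation function and <math>\odot</math> is element-wise multiplication), the backward pass first computes the error at the output layer <math>L</math> and then propagates it back one layer at a time:

:<math>\delta^L = \nabla_a C \odot \sigma'(z^L)</math>
:<math>\delta^l = \left( (W^{l+1})^\mathsf{T} \delta^{l+1} \right) \odot \sigma'(z^l)</math>
:<math>\frac{\partial C}{\partial b^l_j} = \delta^l_j \qquad \frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \, \delta^l_j</math>

The resulting partial derivatives are the gradient that an optimizer such as gradient descent uses to adjust the weights.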
== Examples ==
Please include an example of how your concept is actually used. Your example must include WHERE it is used and WHAT THE BENEFIT of using it is.
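As one self-contained illustration (the XOR dataset, network size, learning rate and iteration count below are assumptions chosen for the sketch, not taken from a source): backpropagation is what makes it possible to train a small multilayer network on XOR, a function a single perceptron cannot represent. The benefit is that the backward pass supplies a gradient for every weight, including those in the hidden layer, so gradient descent knows how to adjust all of them.

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR dataset: four input pairs and their targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # hidden layer: 2 -> 4
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # output layer: 4 -> 1
lr = 0.5

for _ in range(10000):
    # Forward pass over the whole batch.
    a1 = sigmoid(X @ W1 + b1)
    a2 = sigmoid(a1 @ W2 + b2)

    # Backward pass: gradients of the squared-error loss w.r.t. every parameter.
    delta2 = (a2 - Y) * a2 * (1 - a2)
    delta1 = (delta2 @ W2.T) * a1 * (1 - a1)

    # Gradient descent update driven by the backpropagated gradients.
    W2 -= lr * a1.T @ delta2; b2 -= lr * delta2.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ delta1;  b1 -= lr * delta1.sum(axis=0, keepdims=True)

print(np.round(a2.ravel(), 2))   # should approach [0, 1, 1, 0]
</syntaxhighlight>

The same forward pass / backward pass / update pattern is used, at much larger scale, wherever multilayer networks are trained.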
== Pictures, diagrams ==
Pictures and diagrams go a LONG way to helping someone understand a topic, especially if your topic is a little abstract or complex. Using a picture or diagram is a two-part process:
# [https://www.mediawiki.org/wiki/Help:Managing_files upload a file]
# [https://www.mediawiki.org/wiki/Help:Images use the file on a wiki page]
== External links ==
* http://neuralnetworksanddeeplearning.com/chap5.html
* http://neuralnetworksanddeeplearning.com/chap6.html
* http://rimstar.org/science_electronics_projects/backpropagation_neural_network_software_3_layer.htm
* Please make sure the linked content is good and helps fellow students; don't link to Google search results.


== References ==
<references />


[[Category:2018 case study]]
[[Category:Student created article]]
