Backpropagation

<center>
<blockquote style="padding: 5px; background-color: #FFF8DC; border: solid thin gray;">
  [[File:Exclamation.png]] This is student work which has not yet been approved as correct by the instructor
</blockquote>
</center>
[[file:Studying.png|right|frame|Case study notes<ref>http://www.flaticon.com/</ref>]]


== Introduction ==
Backpropagation, or "backward propagation of errors," is a method used in artificial neural networks to calculate the gradient of the loss function with respect to the network's weights. This gradient is needed by optimization algorithms such as gradient descent, which adjust the weights to improve the network's performance, making backpropagation a key part of training in many machine learning algorithms.<ref>https://en.wikipedia.org/wiki/Backpropagation</ref>
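To make this concrete (the notation below is an illustrative convention, not something defined in this article): if <math>E</math> denotes the loss and <math>w</math> a single weight, backpropagation computes the partial derivative <math>\partial E / \partial w</math>, and gradient descent then nudges the weight against that gradient,

:<math>w \leftarrow w - \eta \, \frac{\partial E}{\partial w},</math>

where <math>\eta</math> is the learning rate mentioned in the steps below.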
 
 
== How does it work or a deeper look ==
 
Training a network with backpropagation repeats the following four steps:
 
# Forward Pass: During this step, the network makes a prediction based on the input data. This prediction will initially be fairly inaccurate, as the network's weights are initialized randomly.
# Calculate Loss: The prediction is compared to the actual output, and the difference between the two is calculated. This difference is called the "loss" or "error." There are various methods to calculate this loss, but all aim to represent how far off the network's prediction was from the actual output.
# Backward Pass (Backpropagation): This is where backpropagation really comes into play. The network propagates the error backwards, starting from the output layer and moving through each hidden layer until it reaches the input layer. The goal is to calculate the gradient, or the rate of change of the error with respect to the weights and biases in the network. To do this, it uses the chain rule from calculus to iteratively compute these gradients for each layer (see the expression just after this list).
# Update Weights: The final step is to use these gradients to adjust the weights and biases in the network. This is typically done using an optimization algorithm, such as stochastic gradient descent (SGD), which adjusts the weights in the opposite direction of the gradient to minimize the loss. The size of the adjustments is governed by a parameter called the "learning rate."
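As an illustration of the backward pass (the symbols here are an assumed notation, not taken from this article): for a weight <math>w_{ij}</math> feeding neuron <math>j</math>, the chain rule factors the gradient of the error <math>E</math> into pieces that can be computed one layer at a time,

:<math>\frac{\partial E}{\partial w_{ij}} = \frac{\partial E}{\partial a_j} \cdot \frac{\partial a_j}{\partial z_j} \cdot \frac{\partial z_j}{\partial w_{ij}},</math>

where <math>z_j</math> is the neuron's weighted input and <math>a_j</math> its activation. The first factor, <math>\partial E / \partial a_j</math>, is assembled from gradients already computed for the layer above, which is why the errors are propagated backwards layer by layer rather than recomputed from scratch for every weight.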


The above steps are repeated for many iterations (often grouped into "epochs", full passes over the training data) until the network is adequately trained. The end goal is to adjust the weights and biases of the network so as to minimize the error on the output; in doing so, the network "learns" the relationship between the input data and the output data.
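Below is a minimal sketch of these four steps in code, using a tiny two-layer network trained on the XOR problem with a mean-squared-error loss. The network size, dataset, and hyper-parameters are illustrative assumptions rather than anything prescribed by this article; the point is only to show where the forward pass, the loss calculation, the backward pass, and the weight update occur in one training loop.

<syntaxhighlight lang="python">
import numpy as np

# Minimal backpropagation sketch for a 2-4-1 network on XOR.
# Everything here (layer sizes, data, learning rate) is an illustrative assumption.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights start out random, so early predictions are inaccurate (step 1's starting point)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
learning_rate = 0.5

for epoch in range(10000):
    # 1. Forward pass: compute the network's prediction
    h = sigmoid(X @ W1 + b1)        # hidden-layer activations
    y_hat = sigmoid(h @ W2 + b2)    # network output

    # 2. Calculate loss: mean squared error between prediction and target
    loss = np.mean((y_hat - y) ** 2)

    # 3. Backward pass: apply the chain rule from the output layer backwards
    d_out = 2.0 * (y_hat - y) / len(X)      # dLoss / d(y_hat)
    d_z2 = d_out * y_hat * (1.0 - y_hat)    # through the output sigmoid
    dW2 = h.T @ d_z2
    db2 = d_z2.sum(axis=0, keepdims=True)
    d_h = d_z2 @ W2.T                       # error propagated back to the hidden layer
    d_z1 = d_h * h * (1.0 - h)              # through the hidden sigmoid
    dW1 = X.T @ d_z1
    db1 = d_z1.sum(axis=0, keepdims=True)

    # 4. Update weights: move opposite the gradient, scaled by the learning rate
    W1 -= learning_rate * dW1
    b1 -= learning_rate * db1
    W2 -= learning_rate * dW2
    b2 -= learning_rate * db2

print("final loss:", loss)  # typically small once the network has learned XOR
</syntaxhighlight>

In practice, frameworks such as TensorFlow and PyTorch compute these gradients automatically, but the underlying computation is the same layer-by-layer application of the chain rule.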


== External links ==


* http://neuralnetworksanddeeplearning.com/chap5.html
* http://neuralnetworksanddeeplearning.com/chap6.html
* http://rimstar.org/science_electronics_projects/backpropagation_neural_network_software_3_layer.htm


== References ==
<references />


[[Category:2018 case study]]
[[Category:Student created article]]
