Backpropagation

From Computer Science Wiki

Introduction

Backpropagation is a method to calculate the gradient of the loss function with respect to the weights in an artificial neural network. It is commonly used as part of algorithms that optimize the performance of the network by adjusting the weights, for example in the gradient descent algorithm. It is also called backward propagation of errors.[2]
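To illustrate how gradient descent uses the gradient that backpropagation supplies, here is a minimal sketch of a single weight update. All of the values (learning rate, weight, gradient) are illustrative assumptions, not taken from the article:

```python
# One gradient-descent step for a single weight, assuming the
# gradient dLoss/dWeight has already been computed by backpropagation.
learning_rate = 0.1   # illustrative step size
weight = 2.0          # a single weight, for illustration
gradient = 3.0        # assumed value of dLoss/dWeight at this weight

# Move the weight against the gradient to reduce the loss.
weight = weight - learning_rate * gradient
print(weight)  # 2.0 - 0.1 * 3.0 = 1.7
```

In a real network this same update is applied to every weight, using that weight's own partial derivative of the loss.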

Backpropagation, short for "backward propagation of errors," is an algorithm for supervised learning of artificial neural networks using gradient descent. Given an artificial neural network and an error function, the method calculates the gradient of the error function with respect to the neural network's weights. It is a generalization of the delta rule for perceptrons to multilayer feedforward neural networks.[3]
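The description above can be sketched end to end for a small multilayer feedforward network. This is a minimal NumPy illustration, not the article's own code: the network shape, the XOR data, the sigmoid activation, the mean-squared-error loss, and all variable names (`W1`, `W2`, `delta1`, `delta2`, the learning rate) are assumptions chosen for the example:

```python
import numpy as np

def sigmoid(z):
    # Logistic activation; its derivative is sigmoid(z) * (1 - sigmoid(z)).
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 2)) * 0.5   # weights: 2 inputs -> 4 hidden units
W2 = rng.normal(size=(1, 4)) * 0.5   # weights: 4 hidden units -> 1 output

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])  # inputs
y = np.array([[0.0], [1.0], [1.0], [0.0]])                      # XOR targets

lr = 0.5
losses = []
for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1.T)       # hidden activations, shape (4, 4)
    out = sigmoid(h @ W2.T)     # network outputs, shape (4, 1)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: deltas propagate the error from output to hidden layer.
    delta2 = (out - y) * out * (1 - out)    # output-layer delta
    delta1 = (delta2 @ W2) * h * (1 - h)    # hidden-layer delta (backpropagated)

    # Gradient-descent updates using the computed gradients.
    W2 -= lr * (delta2.T @ h) / len(X)
    W1 -= lr * (delta1.T @ X) / len(X)

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The backward pass is the generalization of the delta rule mentioned above: each layer's delta is the next layer's delta pushed back through the weights and multiplied by the local activation derivative.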

References