Multi-layer perceptron (MLP)


This is student work which has not yet been approved as correct by the instructor

Case study notes[1]

Introduction

Multi-layer perceptrons (MLPs) are a type of artificial neural network consisting of at least three layers of nodes: an input layer, one or more hidden layers, and an output layer. Each hidden and output node is a neuron that 'activates' by passing the weighted sum of its inputs through an activation function and feeding the result on to the next layer.[2]
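
To make the layer structure concrete, here is a minimal forward-pass sketch, assuming Python with NumPy; the layer sizes (3 input, 4 hidden and 2 output nodes) and the sigmoid activation are arbitrary choices for the example rather than anything prescribed by MLPs themselves:

  import numpy as np

  # A tiny MLP: 3 input nodes, one hidden layer of 4 nodes, 2 output nodes.
  # Because the network is fully connected, each connection layer is just a
  # weight matrix plus a bias vector.
  rng = np.random.default_rng(0)
  W1 = rng.normal(size=(3, 4))   # weights from input layer to hidden layer
  b1 = np.zeros(4)
  W2 = rng.normal(size=(4, 2))   # weights from hidden layer to output layer
  b2 = np.zeros(2)

  def sigmoid(z):
      # Non-linear activation: squashes any real number into the range (0, 1).
      return 1.0 / (1.0 + np.exp(-z))

  def forward(x):
      # Pass one input vector through the network, layer by layer.
      hidden = sigmoid(x @ W1 + b1)       # the hidden nodes 'activate'
      output = sigmoid(hidden @ W2 + b2)  # the output nodes 'activate'
      return output

  print(forward(np.array([0.5, -1.0, 2.0])))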


How does it work? A deeper look

  • Multi-layer perceptrons use backpropagation as part of their learning phase: the error at the output layer is propagated backwards so that every weight in the network can be adjusted by gradient descent.
  • The hidden and output nodes apply a non-linear activation function (such as the sigmoid) to the weighted sum of their inputs; this non-linearity is what lets an MLP learn patterns a single perceptron cannot.
  • MLPs are fully connected: each node in one layer is connected to every node in the next layer (see the training sketch after this list).
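
As a worked example of all three points, here is a minimal training sketch, again assuming Python with NumPy; the 2-4-1 network size, the sigmoid activation, the learning rate, the number of epochs and the XOR data set are arbitrary choices made for illustration:

  import numpy as np

  rng = np.random.default_rng(1)

  # XOR: a classic example that a single perceptron cannot learn but an MLP can.
  X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
  y = np.array([[0], [1], [1], [0]], dtype=float)

  # Fully connected layers: 2 input nodes -> 4 hidden nodes -> 1 output node.
  W1 = rng.normal(size=(2, 4))
  b1 = np.zeros((1, 4))
  W2 = rng.normal(size=(4, 1))
  b2 = np.zeros((1, 1))

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  learning_rate = 1.0

  for epoch in range(10000):
      # Forward pass: every layer applies the non-linear activation to a
      # weighted sum of the previous layer's outputs.
      hidden = sigmoid(X @ W1 + b1)
      output = sigmoid(hidden @ W2 + b2)

      # Backpropagation: send the output error backwards through the network,
      # using the chain rule (sigmoid'(z) = s * (1 - s)) to get a gradient
      # for every weight and bias.
      error = output - y
      d_output = error * output * (1 - output)
      d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)

      # Gradient descent step on every weight and bias.
      W2 -= learning_rate * hidden.T @ d_output
      b2 -= learning_rate * d_output.sum(axis=0, keepdims=True)
      W1 -= learning_rate * X.T @ d_hidden
      b1 -= learning_rate * d_hidden.sum(axis=0, keepdims=True)

  # After training the predictions should usually be close to [0, 1, 1, 0].
  print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))

The same idea scales up to more hidden layers and other activation functions; in practice a library implementation such as scikit-learn's MLPClassifier handles the backpropagation details for you.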


References

  [2] Wikipedia: Multilayer perceptron, https://en.wikipedia.org/wiki/Multilayer_perceptron