Multi-layer perceptron (MLP)

From Computer Science Wiki
Revision as of 20:45, 6 April 2018

This is student work which has not yet been approved as correct by the instructor.


== Introduction ==

A multi-layer perceptron is a type of neural network consisting of at least three layers of nodes: an input layer, a hidden layer, and an output layer. Each node feeds its output forward to the nodes in the next layer, and each hidden and output node is a neuron that 'activates' according to a non-linear function of its inputs.

 <ref> https://en.wikipedia.org/wiki/Multilayer_perceptron</ref>
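To make the layered structure above concrete, here is a minimal sketch of a forward pass through a small MLP in plain Python. The layer sizes, weights, and biases are illustrative assumptions, not values from any real model; the sigmoid is just one common choice of activation function.

```python
import math

def sigmoid(x):
    # Non-linear activation: squashes any real value into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    # Fully connected layer: every neuron sees every input value,
    # computes a weighted sum plus a bias, then applies the activation
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A 2-3-1 MLP: 2 inputs, one hidden layer of 3 neurons, 1 output
# (all weights and biases below are made-up example values)
hidden_w = [[0.5, -0.4], [0.3, 0.8], [-0.6, 0.1]]  # 3 neurons x 2 inputs
hidden_b = [0.0, 0.1, -0.1]
output_w = [[1.0, -1.0, 0.5]]                      # 1 neuron x 3 hidden
output_b = [0.2]

# Data flows input layer -> hidden layer -> output layer
hidden = layer_forward([1.0, 0.0], hidden_w, hidden_b)
output = layer_forward(hidden, output_w, output_b)
print(output)  # a single activation value in (0, 1)
```

With untrained, hand-picked weights the output is meaningless; the point is only to show how each layer's activations become the next layer's inputs.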
 

== How does it work or a deeper look ==

* Multi-layer perceptrons use backpropagation as part of their learning phase: the error at the output is propagated backwards through the network to adjust the weights of every connection.
* The nodes use a non-linear activation function (such as the sigmoid), which determines how strongly each node 'turns on' the nodes in the next layer.
* MLPs are fully connected: each hidden node is connected to each input node, and each output node is connected to each hidden node.
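The bullet points above can be sketched as a tiny training loop. This is a from-scratch illustration of backpropagation with gradient descent on the XOR problem, not code from the wiki page; the network size (2-3-1), learning rate, and epoch count are all illustrative assumptions.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR: a classic task that a single perceptron cannot learn,
# but a multi-layer perceptron can
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# 2-3-1 network with randomly initialised weights
n_hidden = 3
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(n_hidden)]
b_h = [0.0] * n_hidden
w_o = [random.uniform(-1, 1) for _ in range(n_hidden)]
b_o = 0.0
lr = 0.5  # learning rate (an arbitrary example value)

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in zip(w_h, b_h)]
    o = sigmoid(sum(w * hi for w, hi in zip(w_o, h)) + b_o)
    return h, o

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

loss_before = mse()
for _ in range(5000):
    for x, t in data:
        h, o = forward(x)
        # Output-layer error term: d(squared error)/d(pre-activation),
        # using sigmoid'(z) = o * (1 - o)
        d_o = (o - t) * o * (1 - o)
        # Backpropagate: each hidden node's share of the blame is
        # the output error scaled by its outgoing weight
        d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(n_hidden)]
        # Gradient-descent weight updates
        for j in range(n_hidden):
            w_o[j] -= lr * d_o * h[j]
            for i in range(2):
                w_h[j][i] -= lr * d_h[j] * x[i]
            b_h[j] -= lr * d_h[j]
        b_o -= lr * d_o
loss_after = mse()

print(loss_before, loss_after)  # the loss should drop as the network learns
```

The key step is the `d_h` line: the output error flows backwards through the weights `w_o`, which is exactly what "backpropagation" means.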

== Pictures, diagrams ==

Pictures and diagrams go a LONG way to helping someone understand a topic, especially if your topic is a little abstract or complex. Using a picture or diagram is a two-part process:

# upload a file
# use the file on a wiki page

== External links ==

* It would be helpful to include links to other internet resources to help fellow students.
* Please make sure the linked content is good, and don't link to Google search results.

== References ==
<references />