Multi-layer perceptron (MLP)

== Introduction ==


Multi-layer perceptrons (MLPs) are a type of feedforward neural network consisting of at least three layers of nodes: an input layer, one or more hidden layers, and an output layer. Each node is a neuron that 'activates' by applying an activation function to the weighted sum of its inputs and passes the result on to the nodes in the next layer. <ref>https://en.wikipedia.org/wiki/Multilayer_perceptron</ref>
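
The structure described above can be sketched in a few lines of code. The following is only an illustration, not a reference implementation: the layer sizes, weights, and input values are invented, and a real network would learn its weights from data.

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    # Non-linear activation: squashes any value into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes: 3 input nodes, 4 hidden nodes, 2 output nodes
x = np.array([0.5, -1.2, 3.0])       # values at the input nodes
W_hidden = np.full((3, 4), 0.1)      # weights from the input layer to the hidden layer
W_output = np.full((4, 2), 0.1)      # weights from the hidden layer to the output layer

hidden = sigmoid(x @ W_hidden)       # hidden nodes 'activate'
output = sigmoid(hidden @ W_output)  # output nodes give the network's answer
print(output)
</syntaxhighlight>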
 


== How does it work or a deeper look ==


* Multi-layer perceptrons use backpropagation as part of their learning phase.
* The nodes use a non-linear activation function (for example sigmoid or tanh), which determines how strongly each node 'fires' in response to its inputs.
* MLPs are fully connected: every node in one layer is connected to every node in the next layer (a minimal training sketch follows this list).
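
The bullets above can be seen working together in a minimal training sketch: a small fully connected MLP learns the XOR function using backpropagation and a sigmoid activation. The layer sizes, learning rate, and number of epochs are arbitrary choices made for this illustration.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and target outputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fully connected weights: every input node feeds every hidden node, and so on
W1 = rng.normal(size=(2, 4))   # input layer -> hidden layer (4 hidden nodes)
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden layer -> output layer
b2 = np.zeros((1, 1))

lr = 0.5                       # learning rate
for epoch in range(10000):
    # Forward pass: each layer 'activates' from the previous layer's output
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagation: push the output error back through the network
    d_out = (out - y) * out * (1 - out)   # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at the hidden layer

    # Gradient-descent weight updates
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))   # predictions should approach [[0], [1], [1], [0]]
</syntaxhighlight>
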
== Examples ==
 
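MLPs are used, for example, for classification tasks such as handwritten digit recognition: their hidden layers let them learn decision boundaries that a single-layer perceptron cannot represent. As a rough illustration (not an authoritative recipe), scikit-learn's MLPClassifier can be trained on its built-in digits dataset; the hidden layer size and iteration limit below are arbitrary choices.

<syntaxhighlight lang="python">
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 8x8 pixel images of handwritten digits and their labels
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An MLP with one hidden layer of 64 nodes, trained with backpropagation
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
</syntaxhighlight>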


== Pictures, diagrams ==


[[File:Mlp-network.png|thumb|Diagram of a multi-layer perceptron network]]


== External links ==
* [https://en.wikipedia.org/wiki/Multilayer_perceptron Multilayer perceptron on Wikipedia]

== References ==
<references/>