Experiment 5

Multilayer Perceptron and Application

AIM:

To introduce students to Multilayer Perceptron (MLP) networks and their applications.

LABORATORY OUTCOMES:

THEORY:

The limitations of the simple perceptron, notably its inability to solve the XOR problem, were analysed by M. Minsky and S. Papert in 1969. The Multilayer Perceptron (MLP) is an extended form of the perceptron that incorporates one or more hidden layers of neurons between its input and output layers. This extended structure enables the MLP to realize logical operations that are not linearly separable, including XOR.
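A minimal illustration of this idea is sketched below (the weights are hand-chosen for the example and are not taken from the simulator): two hidden threshold units compute OR and NAND of the inputs, and the output unit combines them with an AND, which yields XOR.

    import numpy as np

    def step(z):
        # Threshold (step) activation: 1 if the net input is positive, else 0
        return (z > 0).astype(int)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # all input patterns

    # Hidden layer: first unit computes OR, second computes NAND
    W_hidden = np.array([[1, -1],
                         [1, -1]])
    b_hidden = np.array([-0.5, 1.5])

    # Output unit: AND of the two hidden units
    w_out = np.array([1, 1])
    b_out = -1.5

    H = step(X @ W_hidden + b_hidden)   # hidden activations [OR, NAND]
    y = step(H @ w_out + b_out)         # network output
    print(y)                            # prints [0 1 1 0], i.e. XOR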

Multilayer Perceptron Network:

Compared to the simple perceptron, the MLP differs in several respects. Notably, the weights are randomly initialized within a small range (usually [-0.5, 0.5]). Each pattern fed into the network is processed in three passes: forward propagation of the input, backpropagation of the error, and update of the weights.
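The sketch below is a minimal batch-mode example in Python/NumPy of these three passes, not the simulator's own code; the hidden-layer size, learning rate and momentum are assumed values. Weights are initialized uniformly in [-0.5, 0.5] and the network is trained on the XOR data.

    import numpy as np

    rng = np.random.default_rng(0)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Random initialization in [-0.5, 0.5] (weights and biases)
    W1 = rng.uniform(-0.5, 0.5, (2, 4)); b1 = rng.uniform(-0.5, 0.5, 4)
    W2 = rng.uniform(-0.5, 0.5, (4, 1)); b2 = rng.uniform(-0.5, 0.5, 1)

    eta, alpha = 0.5, 0.7                  # learning rate and momentum (assumed)
    dW1_prev = np.zeros_like(W1); dW2_prev = np.zeros_like(W2)

    for epoch in range(5000):
        # 1. Forward propagation
        H = sigmoid(X @ W1 + b1)           # hidden activations
        Y = sigmoid(H @ W2 + b2)           # network outputs

        # 2. Backpropagation of error (squared-error loss, sigmoid derivative)
        delta_out = (Y - T) * Y * (1 - Y)
        delta_hid = (delta_out @ W2.T) * H * (1 - H)

        # 3. Weight update with momentum
        dW2 = -eta * H.T @ delta_out + alpha * dW2_prev
        dW1 = -eta * X.T @ delta_hid + alpha * dW1_prev
        W2 += dW2; b2 += -eta * delta_out.sum(axis=0)
        W1 += dW1; b1 += -eta * delta_hid.sum(axis=0)
        dW2_prev, dW1_prev = dW2, dW1

    # After training, the outputs typically approach [0, 1, 1, 0]
    print(np.round(Y.ravel(), 2))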

PROCEDURE:

  1. Select a sample type from the Samples dropdown and click on the board to plot the sample points.
  2. Change the values in the Parameters section as required.
  3. Enter the number of hidden layers in the designated section.
  4. Click on Learn to see how the MLP classifies the inputs you supplied.
  5. Click on the Init button to restart the experiment from the first iteration.
  6. Click on the Clear button to perform the experiment again.

CONCLUSION:

The MLP simulator was used to classify the input data points. We observed the effect of changing the number of hidden layers and the total number of iterations while keeping the momentum and learning rate of the MLP network constant.

REFERENCES:

  1. Rosenblatt, F. (1961). Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Washington, DC: Spartan Books.
  2. Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). "Learning Internal Representations by Error Propagation." In D. E. Rumelhart, J. L. McClelland, and the PDP Research Group (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations. MIT Press.
  3. Cybenko, G. (1989). "Approximation by Superpositions of a Sigmoidal Function." Mathematics of Control, Signals, and Systems, 2(4), 303-314.
  4. Haykin, S. (1998). Neural Networks: A Comprehensive Foundation (2nd ed.). Prentice Hall. ISBN 0-13-273350-1.
  5. Wasserman, P. D., and Schwartz, T. (1988). "Neural Networks, Part 2: What Are They and Why Is Everybody So Interested in Them Now?" IEEE Expert, 3(1), 10-15.
  6. Collobert, R., and Bengio, S. (2004). "Links between Perceptrons, MLPs and SVMs." Proceedings of the International Conference on Machine Learning (ICML).