Theory of Artificial Neural Networks
History of Artificial Neural Networks
The history of ANNs can be divided into three periods:
From the 1940s to the 1960s:
- 1943: Warren McCulloch and Walter Pitts modeled a simple neural network using electrical circuits.
- 1949: Donald Hebb's book, "The Organization of Behavior," introduced the concept of neural plasticity.
- 1956: Taylor introduced an associative memory network.
- 1958: Frank Rosenblatt invented the Perceptron learning algorithm.
- 1960: Bernard Widrow and Marcian Hoff developed models called "ADALINE" and "MADALINE."
From the 1960s to the 1980s:
- 1961: Rosenblatt proposed a back-propagating error-correction scheme for multilayer networks.
- 1969: Minsky and Papert published "Perceptrons," demonstrating the limitations of single-layer perceptrons.
- 1971: Kohonen developed associative memories.
- 1976: Grossberg and Carpenter developed Adaptive Resonance Theory.
From the 1980s to the present:
- 1982: Hopfield introduced the Energy approach.
- 1985: Ackley, Hinton, and Sejnowski developed the Boltzmann machine.
- 1986: Rumelhart, Hinton, and Williams introduced the Generalised Delta Rule.
- 1988: Kosko developed Binary Associative Memory (BAM) and introduced the concept of Fuzzy Logic in ANN.
Biological Neuron
Neural networks are inspired by our brains. A biological neural network describes a population of physically interconnected neurons or a group of disparate neurons whose inputs or signaling targets define a recognizable circuit.
Communication between neurons often involves an electrochemical process. The interface through which they interact with surrounding neurons usually consists of several dendrites (input connections), which are connected via synapses to other neurons, and one axon (output connection).
The brain processes information in both parallel and serial ways, as is readily apparent from the physical anatomy of the nervous system.
Artificial Neural Network (ANN)
An artificial neural network is a system based on the operation of biological neural networks, emulating the functions of the brain. An ANN can perform tasks that a linear program cannot, and it learns from examples rather than having to be reprogrammed.
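The basic computing element of such a network can be sketched as a single artificial neuron: a weighted sum of inputs passed through an activation function. This is a minimal illustration; the weights, bias, and sigmoid activation below are illustrative choices, not prescribed by the text.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus a bias,
    passed through a sigmoid activation function."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes output into (0, 1)

# Example: two inputs with illustrative weights and bias.
y = neuron([1.0, 0.5], [0.4, -0.2], bias=0.1)
print(round(y, 3))  # prints 0.599
```

Learning, in this picture, amounts to adjusting the weights and bias from example data rather than rewriting the program.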
Advantages of ANNs include parallel operation, the ability to keep learning from new data, and versatile implementation. However, an ANN must be trained before it can operate, and processing time can be high for large networks.
Neural Network Topologies
Neural networks can have different topologies:
- Feed-forward neural networks: Data flow strictly from input to output units with no feedback connections.
- Recurrent neural networks: Contain feedback connections, and their dynamical properties are important.
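The feed-forward case can be sketched as layers applied in sequence, with signals flowing strictly from inputs toward outputs and no feedback connections. The layer sizes, weights, and sigmoid activation below are illustrative assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One fully connected layer: each output neuron computes a weighted
    sum of all inputs plus a bias, then applies the activation."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

def feed_forward(x, w_hidden, b_hidden, w_out, b_out):
    """Data flow strictly input -> hidden -> output; no feedback loops."""
    hidden = layer(x, w_hidden, b_hidden)
    return layer(hidden, w_out, b_out)

# Illustrative network: 2 inputs, 2 hidden neurons, 1 output.
w_h = [[0.5, -0.3], [0.8, 0.2]]
b_h = [0.0, -0.1]
w_o = [[1.0, -1.0]]
b_o = [0.2]
print(feed_forward([1.0, 0.5], w_h, b_h, w_o, b_o))
```

A recurrent network would differ by feeding some outputs back as inputs at the next time step, which is what gives it the dynamical properties mentioned above.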
Training of Artificial Neural Networks
A neural network can be trained using supervised learning, unsupervised learning, or reinforcement learning:
- Supervised learning: The network is trained on input-output pairs, with the correct output for each input supplied by an external teacher or by the system containing the network.
- Unsupervised learning: The network discovers statistically salient features of the input population without predefined categories.
- Reinforcement learning: The learning machine interacts with the environment and adjusts its parameters based on feedback responses.
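As a concrete sketch of the supervised case, the classic perceptron learning rule adjusts each weight in proportion to the error between the teacher's target and the network's output. The AND-gate training data, learning rate, and epoch count below are illustrative assumptions.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Supervised learning: a teacher supplies the target output for
    every input, and weights are nudged to reduce the error."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = 1 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0
            err = target - y             # teacher feedback (error signal)
            w[0] += lr * err * x[0]      # perceptron learning rule
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Illustrative input-output pairs: the logical AND function.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0
print([predict(x) for x, _ in data])  # prints [0, 0, 0, 1]
```

An unsupervised method would instead adapt the weights from the statistics of the inputs alone, with no target outputs provided.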
Conclusion
Artificial neural networks have a rich history and are inspired by biological neurons. They offer advantages such as parallel processing, continuous learning, and versatility. Different topologies and training methods enable a wide range of real-world applications of neural networks.