Neural networks are inspired by the neurons in the brain but do not actually simulate them. The human brain is composed of roughly 86 billion nerve cells called neurons, which exchange electric impulses that travel quickly through the brain's biological network.
Dendrites accept inputs from sensory organs; a neuron can then either pass the message on to other neurons or decide not to forward it.
An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain.
Each connection, like the synapses in a biological brain, can transmit a signal from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it.
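This receive-process-signal behaviour can be sketched as a single artificial neuron: a weighted sum of its incoming signals plus a bias, passed through an activation function. The weights, bias, and sigmoid activation below are illustrative choices, not taken from any specific model.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of incoming signals plus a bias, squashed by a
    # sigmoid into (0, 1) -- loosely analogous to how strongly a
    # biological neuron "fires".
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Three hypothetical incoming signals with made-up weights.
output = neuron([0.5, 0.1, 0.9], weights=[0.4, -0.7, 0.2], bias=0.1)
print(round(output, 3))
```

The output is itself a signal that can feed into further neurons downstream, which is exactly the chaining the paragraph above describes.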
Commercial applications of these technologies generally focus on solving complex signal processing or pattern recognition problems.
A neural network consists of a layer of input neurons where information enters the network, a layer of output neurons where the result can be read out, and some number of hidden layers in between.
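A minimal sketch of that input-hidden-output structure, using fully connected layers with a ReLU activation (the layer sizes and random weights here are assumptions for illustration only):

```python
import random

def dense_layer(inputs, weights, biases):
    # Each neuron in the layer takes a weighted sum over ALL inputs
    # (fully connected), plus a bias, then a ReLU activation.
    return [max(0.0, sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

random.seed(0)
# Hypothetical shapes: 3 inputs -> 4 hidden neurons -> 2 outputs.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
w_out    = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

x = [0.2, 0.7, 0.1]                      # input layer: raw signals
h = dense_layer(x, w_hidden, [0.0] * 4)  # hidden layer
y = dense_layer(h, w_out,    [0.0] * 2)  # output layer: the result
print(len(y))
```

Information flows strictly forward here, layer by layer, which is the defining property of the feed-forward networks listed below.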
Types of ANNs:
1) Feed-forward neural network
2) Feedback neural network
3) Recurrent neural network, the basis of Long Short-Term Memory (LSTM) models
4) Convolutional neural network
5) Radial basis function neural network
6) Kohonen self-organizing neural network
7) Modular neural network
How do artificial neural networks work?
A neural network usually involves a large number of processors operating in parallel and arranged in tiers. The first tier receives the raw input information -- analogous to optic nerves in human visual processing. Each successive tier receives the output from the tier preceding it, rather than from the raw input -- in the same way neurons further from the optic nerve receive signals from those closer to it. The last tier produces the output of the system.
Each processing node has its own small sphere of knowledge, including what it has seen and any rules it was originally programmed with or developed for itself. The tiers are highly interconnected, which means each node in tier n is connected to many nodes in tier n-1 (its inputs) and in tier n+1 (the nodes it provides input for). The output layer may contain one or more nodes, from which the network's answer can be read.
Neural networks are notable for being adaptive: they modify themselves as they learn from initial training, and subsequent runs provide more information about the world. The most basic learning model centres on weighting the input streams: each node assigns an importance to the input from each of its predecessors, and inputs that contribute to correct answers are weighted more heavily.
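The simplest concrete instance of this weighting idea is the classic perceptron learning rule: weights that pushed the prediction toward the right answer are strengthened, and weights that pushed it the wrong way are weakened. The learning rate, epoch count, and toy AND task below are illustrative assumptions.

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    # samples: list of (inputs, target) pairs with target 0 or 1.
    # After each prediction, each weight is nudged in proportion to
    # the error and to the input that fed it -- inputs that help get
    # the right answer end up weighted higher.
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else 0
            err = target - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy task: learn logical AND from its four input/output pairs.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else 0
print([predict(x) for x, _ in data])  # → [0, 0, 0, 1]
```

Modern networks replace this rule with gradient-based training (backpropagation), but the principle is the same: adjust the weights so that inputs contributing to correct answers count for more.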