Neural networks consist of layers of neurons, much like the human brain. These neurons are the core processing units of the network. First, we have the input layer, which receives the input, and the output layer, which produces the final prediction. In between lie the hidden layers, which perform most of the computation required by the network.
Architecture with an Example:
Assume an image of a circle. This image is composed of 28 by 28 pixels, which makes 784 pixels in total. Consider each pixel as an input to a neuron of the first layer. Neurons of one layer are connected to neurons of the next layer through channels, and each of these channels is assigned a numerical value known as a weight. The inputs are multiplied by the corresponding weights, and their sum is sent as input to the neurons in the hidden layer. Each of these neurons is also associated with a numerical value called the bias, which is added to the weighted sum. This value is then passed through a threshold function called the activation function.
The result of the activation function determines whether the particular neuron will be activated. An activated neuron transmits data to the neurons of the next layer over the channels.
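The computation for a single neuron can be sketched in a few lines of Python with NumPy. The pixel values, weights, and bias below are made up for illustration, and sigmoid is just one common choice of activation function:

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical example: 4 pixel inputs feeding one neuron
inputs = np.array([0.2, 0.8, 0.0, 0.5])    # pixel intensities
weights = np.array([0.4, -0.1, 0.9, 0.3])  # one weight per channel
bias = 0.1                                 # the neuron's bias

# Multiply inputs by their weights, sum them, then add the bias
z = np.dot(inputs, weights) + bias
# The activation function decides how strongly the neuron fires
activation = sigmoid(z)
print(round(activation, 3))  # prints 0.562
```

The closer the activation is to 1, the more strongly this neuron passes its signal on to the next layer.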
In this manner, the data is propagated through the network; this is called forward propagation. In the output layer, the neuron with the highest value fires and determines the prediction. The output values are essentially probabilities. For example, here the neuron associated with the square has the highest probability, hence a square is the output predicted by the neural network.
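A full forward pass through a small network with one hidden layer might look like the sketch below. The layer sizes, the random weights, and the use of a softmax to turn output scores into probabilities are all illustrative assumptions, not the exact setup from the example above:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    # Hidden layer: weighted sums plus biases, then the activation function
    hidden = sigmoid(W1 @ x + b1)
    # Output layer: one score per class (say, circle vs. square)
    scores = W2 @ hidden + b2
    # Softmax turns raw scores into probabilities that sum to 1
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

# Hypothetical sizes: 784 pixel inputs, 16 hidden neurons, 2 output classes
W1, b1 = rng.normal(size=(16, 784)) * 0.01, np.zeros(16)
W2, b2 = rng.normal(size=(2, 16)) * 0.01, np.zeros(2)

x = rng.random(784)  # a stand-in for a flattened 28x28 image
probs = forward(x, W1, b1, W2, b2)
prediction = int(np.argmax(probs))  # the neuron with the highest value fires
print(probs, prediction)
```

With untrained random weights like these, the prediction is essentially a guess; training is what makes the output probabilities meaningful.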
We know our neural network has made a wrong prediction, but how does the network figure this out? Note that our network is yet to be trained. During the training process, along with the input, our network is also fed the expected output. The predicted output is compared against the actual output to measure the error in prediction; the magnitude of the error indicates how wrong we are, and the sign indicates the direction of the change needed to reduce it. This information is then transferred backward through our network, which is known as backpropagation. Based on this information, the weights are adjusted. This cycle of forward propagation and backpropagation is performed iteratively with multiple inputs. The process continues until the weights are set such that the network can predict the shapes correctly.
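This train-by-correction cycle can be sketched end to end on a toy problem. The example below is not the shape classifier from the article: XOR stands in for the data, the network sizes and learning rate are made up, and the gradients of the sigmoid are written out by hand to show the backward pass explicitly:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dataset (XOR): four inputs with their expected outputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Tiny network: 2 inputs -> 4 hidden neurons -> 1 output
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
lr = 0.5  # learning rate: how far to move the weights each step

for step in range(5000):
    # Forward propagation
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Error: how far the predicted output is from the actual output
    error = out - y

    # Backpropagation: send the error backward through the network
    d_out = error * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Adjust weights and biases in the direction that reduces the error
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

predictions = (out > 0.5).astype(int)
print(predictions.ravel())  # should approach [0 1 1 0] as training converges
```

Each pass through the loop is one forward-propagation/backpropagation cycle; repeating it nudges the weights until the predictions match the expected outputs.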
Training Time for Neural Networks:
You might wonder how long the training process takes. Honestly, neural networks may take hours or even months to train, but the time spent is a reasonable trade-off when compared to what they can do. Let us look at some of the prime applications of neural networks. In facial recognition, cameras on smartphones these days can estimate the age of a person based on their facial features; this is neural networks at play. In finance, neural networks are trained to understand patterns and detect the possibility of a fall or rise in stock prices with high accuracy. Neural networks can even learn patterns in music and train themselves enough to compose a fresh tune.
The Future of Neural Networks:
We are still taking baby steps. Growth in this field has been foreseen by big names: companies such as Google, Amazon, and Nvidia have invested in developing products such as libraries, predictive models, and intuitive GPUs that support the implementation of neural networks. The question dividing the visionaries concerns the reach of neural networks: to what extent can they go?
Can we replicate the human brain? We will have to wait a few more years for a definite answer.