Introduction To Neural Networks

In conventional programming, we give a computer or machine a set of steps or tasks that it must perform in a specific order to accomplish a goal; this set of steps is called an algorithm. But scientists were not satisfied with this. They wanted to take software to another level, where programs can understand and make decisions on their own with minimal or no human intervention. This is where neural networks, and the smart algorithms that depend on them, come in.

What are Artificial Neural Networks?

Neural networks, also known as artificial neural networks (ANNs) or simulated neural networks (SNNs), are a subset of machine learning and lie at the heart of deep learning algorithms. Their name and structure are inspired by the human brain, mimicking the way biological neurons signal one another.

Network Layers:

The most common type of artificial neural network consists of three groups, or layers, of units: a layer of "input" units connected to a layer of "hidden" units, which is connected to a layer of "output" units.


Input units: The activity of the input units represents the raw information that is entered into the network. This is also called the input layer.

Hidden units: The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input units and the hidden units. This is also called the hidden layer.

Output units: The behaviour of the output units depends on the activity of the hidden units and the weights between the hidden units and the output units. This is also called the output layer.
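
To make this three-layer structure concrete, here is a minimal sketch in Python (using NumPy) of how the connections between layers can be represented as weight matrices; the layer sizes (3 input, 4 hidden, 2 output units) are made-up values chosen only for illustration.

    import numpy as np

    # Hypothetical layer sizes, chosen only for illustration.
    n_input, n_hidden, n_output = 3, 4, 2

    # Weights on the connections between the input and hidden layers,
    # and between the hidden and output layers.
    W_hidden = np.random.randn(n_hidden, n_input)   # shape (4, 3)
    W_output = np.random.randn(n_output, n_hidden)  # shape (2, 4)

    # One bias per unit in the hidden and output layers.
    b_hidden = np.zeros(n_hidden)
    b_output = np.zeros(n_output)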

 

How does a single neuron work?



As we can see in the image of the artificial neuron, it works in a similar way: it receives several inputs (symbol X in the image), each of which is scaled by a weight (symbol W in the image).

These weighted values are then summed and processed by a mathematical function called the sigmoid function (symbol σ in the image) to output a value between zero and one; this value constitutes the output of the cell (symbol Y in the image).
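
Below is a minimal sketch of this computation in Python; the input values, weights, and bias are made-up numbers used only to illustrate the weighted sum followed by the sigmoid.

    import numpy as np

    def sigmoid(z):
        # Squashes any real number into the range (0, 1).
        return 1.0 / (1.0 + np.exp(-z))

    # Made-up inputs (X) and weights (W), plus a bias term.
    X = np.array([0.5, 0.3, 0.2])
    W = np.array([0.4, 0.7, -0.2])
    b = 0.1

    # Weighted sum of the inputs, then the sigmoid activation.
    z = np.dot(W, X) + b     # 0.47
    Y = sigmoid(z)           # about 0.62, the cell's output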

Through an assemblage of tens, hundreds, or even thousands of these artificial neurons, what we know as an Artificial Neural Network (ANN) is formed.

The Two Main Parts of NN:

    1. Feed Forward Propagation/Forward Propagation.

    2. Backward Propagation/Backpropagation.


Forward Propagation:

   - As the name implies, the input data is fed in the forward direction through the network. Each hidden layer accepts the input data, processes it according to the activation function, and passes the result to the next layer.

   - In order to generate output, the input data has to be fed only in the forward direction. The data must not flow in the reverse direction while the output is being generated, or else it would form a cycle and the output could never be generated. Such network configurations are known as feed-forward networks, and this feed-forward structure is what makes forward propagation possible.

   - In each neuron in the hidden or output layer, processing occurs in two steps (a sketch follows the list below):

· Pre-activation: This is a weighted sum of the inputs, i.e. a linear transformation of the available inputs using the weights. Based on this aggregate and the activation function, the neuron decides whether or not to pass this information further.

· Activation: The computed weighted sum of inputs is passed to the activation function. The activation function is a mathematical function that adds nonlinearity to the network. Four commonly used activation functions are sigmoid, hyperbolic tangent (tanh), ReLU, and softmax.
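
To make the two steps concrete, here is a minimal sketch in Python of one forward pass through a network with a single hidden layer; the sizes and input values are arbitrary, and sigmoid is assumed as the activation function for both layers.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x = np.array([0.5, 0.3, 0.2])      # input vector (arbitrary values)

    W1 = np.random.randn(4, 3) * 0.1   # input -> hidden weights
    b1 = np.zeros(4)
    W2 = np.random.randn(2, 4) * 0.1   # hidden -> output weights
    b2 = np.zeros(2)

    # Hidden layer: pre-activation (weighted sum), then activation.
    z1 = W1 @ x + b1
    a1 = sigmoid(z1)

    # Output layer: the same two steps again.
    z2 = W2 @ a1 + b2
    y = sigmoid(z2)                    # network output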

 

Backpropagation:

   - Backpropagation is the core of neural network training. It is the practice of fine-tuning the weights of the neural network based on the error rate obtained in the previous epoch. Correct adjustment of the weights ensures lower error rates, which makes the model reliable by increasing its generalizability.

   - Our model does not yet give accurate predictions, because its weights have not been adjusted, so we also incur a loss. Backpropagation is about feeding this loss backward through the network in such a way that we can adjust each weight according to its contribution to the error. An optimization function like gradient descent will help us find weights that, hopefully, result in a smaller loss in the next iteration.
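
In symbols, the gradient descent update applied to each weight w can be written as follows (in LaTeX notation), where \eta is the learning rate, a small step size we choose, and \partial L / \partial w is the gradient of the loss L with respect to that weight:

    w \leftarrow w - \eta \, \frac{\partial L}{\partial w}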

The overall steps are (an end-to-end sketch follows this list):

· In the forward propagation stage, the data flows through the network to produce the outputs.

· The loss function is used to calculate the total error.

· Then, we use the backpropagation algorithm to calculate the gradient of the loss function with respect to each weight and bias.

· Finally, we use gradient descent to update the weights and biases at each layer.

· We repeat the above steps to minimize the total error of the neural network.
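
Putting all of these steps together, below is a minimal, self-contained sketch in Python that trains a tiny network on the XOR problem. The architecture (2-4-1 units), sigmoid activations, mean squared error loss, learning rate, and epoch count are all illustrative assumptions, not choices made in this article.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sigmoid_deriv(a):
        # Derivative of the sigmoid, expressed via its output a = sigmoid(z).
        return a * (1.0 - a)

    # XOR dataset: 4 samples with 2 features each, plus targets.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(2, 4))   # input -> hidden weights
    b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1))   # hidden -> output weights
    b2 = np.zeros(1)

    lr = 0.5                       # learning rate (arbitrary choice)

    for epoch in range(10000):
        # 1. Forward propagation: data flows through the network.
        a1 = sigmoid(X @ W1 + b1)             # hidden activations
        y = sigmoid(a1 @ W2 + b2)             # network outputs

        # 2. Loss function: total error (mean squared error).
        loss = np.mean((y - T) ** 2)

        # 3. Backpropagation: gradient of the loss w.r.t. each weight and bias.
        d_z2 = 2 * (y - T) / len(X) * sigmoid_deriv(y)
        dW2 = a1.T @ d_z2
        db2 = d_z2.sum(axis=0)
        d_z1 = d_z2 @ W2.T * sigmoid_deriv(a1)
        dW1 = X.T @ d_z1
        db1 = d_z1.sum(axis=0)

        # 4. Gradient descent: update the weights and biases at each layer.
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

    # 5. After repeating these steps, the outputs should approach [0, 1, 1, 0].
    print(np.round(y, 2))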

Mohamed Eslam
Mar 28, 2022
