
Neural Network - Introduction


In neuroscience, a neural network is a series of interconnected neurons whose activation defines a recognizable linear pathway. The interface through which neurons interact with their neighbors usually consists of several axon terminals connected via synapses to dendrites on other neurons. If the sum of the input signals into one neuron surpasses a certain threshold, the neuron sends an action potential (AP) at the axon hillock and transmits this electrical signal along the axon.
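To make the threshold idea concrete, here is a minimal Python sketch (added here as an illustration, not part of the original material) of a single artificial threshold unit: it fires only when the weighted sum of its inputs exceeds a threshold. The input values, weights, and threshold below are arbitrary example choices.

```python
# A single threshold unit: fires (returns 1) only when the weighted input sum
# exceeds the threshold, loosely mirroring the action-potential mechanism above.
def threshold_neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Illustrative values, not taken from the post.
print(threshold_neuron([1.0, 0.5], [0.8, 0.4], threshold=0.9))  # 1 -> fires
print(threshold_neuron([0.2, 0.1], [0.8, 0.4], threshold=0.9))  # 0 -> stays silent
```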

In the context of AI, however, the term neural network refers to an artificial neural network.

So, what is an Artificial Neural Network?

The simplest definition of a neural network, more properly referred to as an 'artificial' neural network (ANN), is provided by the inventor of one of the first neurocomputers, Dr. Robert Hecht-Nielsen. He defines a neural network as:
"...a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs."
(from "Neural Network Primer: Part I" by Maureen Caudill, AI Expert, Feb. 1989)
An artificial neural network (ANN) is a computational model based on the structure and functions of biological neural networks. Information that flows through the network affects the structure of the ANN, because a neural network changes - or learns, in a sense - based on that input and output.

ANNs are considered nonlinear statistical data-modeling tools, in which complex relationships between inputs and outputs are modeled or patterns are found.

Basics of Artificial Neural Network

Neural networks are typically organized in layers. Layers are made up of a number of interconnected 'nodes' which contain an 'activation function'. Patterns are presented to the network via the 'input layer', which communicates to one or more 'hidden layers' where the actual processing is done via a system of weighted 'connections'. The hidden layers then link to an 'output layer' where the answer is output as shown in the graphic below.
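As a rough sketch of this layered structure (an illustration added here, with the layer sizes, weights, and the sigmoid activation chosen arbitrarily rather than taken from the post), the Python example below pushes an input pattern through one hidden layer and one output layer of weighted connections:

```python
import math

def sigmoid(x):
    # A common choice of activation function; any nonlinearity could be used here.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each node: weighted sum of its inputs plus a bias, passed through the activation.
    return [sigmoid(sum(i * w for i, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

# 2 input nodes -> 3 hidden nodes -> 1 output node; all weights are illustrative.
hidden_w = [[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]]  # one weight list per hidden node
hidden_b = [0.0, 0.1, -0.1]
output_w = [[0.7, -0.5, 0.2]]                      # one weight list per output node
output_b = [0.05]

pattern = [0.9, 0.3]                 # the input pattern presented to the network
hidden = layer(pattern, hidden_w, hidden_b)
answer = layer(hidden, output_w, output_b)
print(answer)                        # the output layer's "answer" for this pattern
```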


One of the most important features of an ANN is its ability to learn. It changes its internal structure based on the information flowing through it, i.e., it adjusts its internal weights. A weight is a value that controls how strongly a signal passes between two neurons in a neural network.

Most ANNs contain some form of 'learning rule' which modifies the weights of the connections according to the input patterns that it is presented with. In a sense, ANNs learn by example as do their biological counterparts; a child learns to recognize dogs from examples of dogs.

Although there are many different kinds of learning rules used by neural networks, this introduction is concerned with only one: the delta rule. The delta rule is often utilized by the most common class of ANNs, called backpropagational neural networks (BPNNs). Backpropagation is an abbreviation for the backward propagation of error.
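To give a concrete feel for how the delta rule adjusts weights, here is a minimal Python sketch for a single linear node (the training data, learning rate, and epoch count are illustrative assumptions, not details from the post): each weight is nudged in proportion to the error between the target and the node's output, and to the input that fed that weight.

```python
def train_delta_rule(samples, weights, learning_rate=0.1, epochs=50):
    # samples: list of (inputs, target) pairs for a single linear node.
    for _ in range(epochs):
        for inputs, target in samples:
            output = sum(x * w for x, w in zip(inputs, weights))
            error = target - output
            # Delta rule: w_i <- w_i + eta * error * x_i
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, inputs)]
    return weights

# Illustrative task: learn to reproduce target = 2*x1 - 1*x2 from three examples.
samples = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0)]
print(train_delta_rule(samples, weights=[0.0, 0.0]))  # approaches [2.0, -1.0]
```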

Uses of Artificial Neural Networks

  • Pattern Recognition — We’ve mentioned this several times already and it’s probably the most common application. Examples are facial recognition, optical character recognition, etc.

  • Time Series Prediction — Neural networks can be used to make predictions. Will the stock rise or fall tomorrow? Will it rain or be sunny?

  • Signal Processing — Cochlear implants and hearing aids need to filter out unnecessary noise and amplify the important sounds. Neural networks can be trained to process an audio signal and filter it appropriately.

  • Control — You may have read about recent research advances in self-driving cars. Neural networks are often used to manage steering decisions of physical vehicles (or simulated ones).

  • Anomaly Detection — Because neural networks are so good at recognizing patterns, they can also be trained to generate an output when something occurs that doesn’t fit the pattern. Think of a neural network monitoring your daily routine over a long period of time. After learning the patterns of your behavior, it could alert you when something is amiss.
