Industry Use-Case of Neural Networks

Naveen Pareek
10 min read · Oct 10, 2021


Before discussing the use-cases of neural networks, we first have to briefly understand what a neural network actually is.

As we can all see, neural networks, and artificial intelligence more broadly, play a vital role in the current era of technology. These technologies take human life to the next level: whether we are playing a video game or driving a car, we are using AI everywhere, and a neural network is somehow involved in almost all of these daily tasks.

Neural networks reflect the behaviour of the human brain, allowing computer programs to recognize patterns and solve common problems in the fields of AI, machine learning, and deep learning.

In this article, we are going to cover almost all the aspects of neural networks, discuss their future benefits, and look at an industrial case study on neural networks.

What are neural networks?

Neural networks, also known as artificial neural networks (ANNs) or simulated neural networks (SNNs), are a subset of machine learning and are at the heart of deep learning algorithms. Their name and structure are inspired by the human brain, mimicking the way that biological neurons signal to one another.

The most groundbreaking aspect of neural networks is that, once trained, they learn on their own. In this way, they emulate the human brain, which is made up of neurons, the fundamental building block of information transmission in both humans and neural networks.

Artificial neural networks (ANNs) are comprised of node layers, containing an input layer, one or more hidden layers, and an output layer. Each node, or artificial neuron, connects to another and has an associated weight and threshold. If the output of any individual node is above the specified threshold value, that node is activated, sending data to the next layer of the network. Otherwise, no data is passed along to the next layer of the network.
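To make this concrete, here is a minimal sketch in Python of a single node's decision. The inputs, weights, and threshold are made-up values for illustration:

```python
# A single artificial neuron: compare the weighted sum of inputs to a threshold.
# All values here are made up, for illustration only.
inputs = [1.0, 0.0, 1.0]    # signals arriving from the previous layer
weights = [0.6, 0.4, 0.9]   # how important each input is
threshold = 1.0             # the node "fires" only above this value

weighted_sum = sum(w * x for w, x in zip(weights, inputs))
output = 1 if weighted_sum > threshold else 0  # 1: pass data on; 0: stay silent
print(weighted_sum, output)  # 1.5 1
```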

Neural networks rely on training data to learn and improve their accuracy over time. However, once these learning algorithms are fine-tuned for accuracy, they are powerful tools in computer science and artificial intelligence, allowing us to classify and cluster data at a high velocity. Tasks in speech recognition or image recognition can take minutes versus hours when compared to manual identification by human experts. One of the most well-known neural networks is Google’s search algorithm.

How do neural networks work?

Now that we have an idea of what the basic structure of a neural network looks like, we will go ahead and explain how it works. To do so, we need to explain the different types of neurons that we can include in our network.

The first type of neuron that we are going to explain is the perceptron. Even though its use has declined today, understanding how it works will give us a good clue about how more modern neurons function.

A perceptron uses a function to learn a binary classifier, mapping a vector of binary variables to a single binary output, and it can be used in supervised learning. In this context, the perceptron follows these steps:

  1. Multiply all the inputs by their weights w, real numbers that express how important the corresponding inputs are to the output.
  2. Add the products together; this is referred to as the weighted sum: ∑ wj xj.
  3. Apply the activation function: determine whether the weighted sum is greater than the threshold value (where -threshold is equivalent to the bias), and assign 1 as the output if it is, and 0 otherwise.

We can also write the perceptron function in the following terms:

output = 0 if w·x + b ≤ 0, and output = 1 if w·x + b > 0

where b is the bias, equivalent to -threshold, and w·x is the dot product of w, the vector whose components are the weights, and x, the vector of the inputs.

In consequence, a perceptron can analyze different evidence or data and make a decision according to the set preferences. It is possible, in fact, to create more complex networks with more layers of perceptrons, where every layer weights the output of the previous one, allowing more and more complex decisions.
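To make the steps above concrete, here is a minimal perceptron sketch in plain Python, trained on the logical AND function. The data, learning rate, and number of passes are arbitrary illustrative choices, and the update rule is the classic perceptron learning rule, not any particular library's implementation:

```python
# Minimal perceptron trained on logical AND. All hyperparameters are illustrative.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]  # one weight per input
b = 0.0         # bias, equivalent to -threshold
lr = 0.1        # learning rate

def predict(x):
    # Steps 1-3: weighted sum plus bias, then the step activation.
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else 0

for _ in range(20):  # a few passes over the data
    for x, target in data:
        error = target - predict(x)  # classic perceptron update rule
        w = [wi + lr * error * xi for wi, xi in zip(w, x)]
        b += lr * error

print([predict(x) for x, _ in data])  # expected: [0, 0, 0, 1]
```

Because AND is linearly separable, the weights settle after a few passes; for nonlinear problems a single perceptron cannot succeed, which motivates the multi-layer networks discussed below.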

But wait a minute: if perceptrons can do a good job of making complex decisions, why do we need other types of neurons? One of the disadvantages of a network of perceptrons is that small changes in the weights or bias, even in only one perceptron, can radically flip our output from 0 to 1 or vice versa. What we really want is to be able to gradually change the behaviour of our network by introducing small modifications to the weights or bias.

This is where a more modern type of neuron comes in handy (nowadays its use has in turn been replaced by other types such as tanh and, lately, ReLU): the sigmoid neuron. The main difference between a sigmoid neuron and a perceptron is that the inputs and the output can be any continuous value between 0 and 1. The output is obtained by applying the sigmoid function to the inputs, taking into account the weights, w, and the bias, b.

So, the formula of the output is:

output = σ(w·x + b) = 1 / (1 + e^-(w·x + b))
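As a small illustration with made-up weights and inputs, notice how nudging one weight only nudges the output slightly, unlike the all-or-nothing perceptron:

```python
import math

def sigmoid_neuron(x, w, b):
    # Smooth activation: the output lies strictly between 0 and 1.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# Illustrative values: a small change in a weight produces a small
# change in the output, which is what makes gradual learning possible.
print(sigmoid_neuron([1.0, 0.5], [0.40, 0.8], -0.3))  # ~0.622
print(sigmoid_neuron([1.0, 0.5], [0.45, 0.8], -0.3))  # ~0.634
```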

Most deep neural networks are feedforward, meaning data flows in one direction only, from input to output. However, you can also train your model through backpropagation, that is, by moving in the opposite direction, from output to input. Backpropagation allows us to calculate and attribute the error associated with each neuron, allowing us to adjust and fit the parameters of the model appropriately.
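Here is a deliberately tiny sketch of that idea, assuming a single sigmoid neuron and a squared-error loss; a real network chains the same rule backwards through every layer:

```python
import math

# Backpropagation on one sigmoid neuron with squared-error loss.
# Inputs, target, starting weights, and learning rate are all illustrative.
x, target = [1.0, 0.5], 1.0
w, b, lr = [0.1, -0.2], 0.0, 0.5

for step in range(100):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    out = 1 / (1 + math.exp(-z))               # forward pass
    grad_z = (out - target) * out * (1 - out)  # chain rule: dLoss/dz
    w = [wi - lr * grad_z * xi for wi, xi in zip(w, x)]
    b -= lr * grad_z  # backward pass: move parameters against the gradient

print(round(out, 3))  # creeps toward the target of 1.0
```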

Types of neural networks

Neural networks can be classified into different types, which are used for different purposes. While this isn’t a comprehensive list, the types below are representative of the most common neural networks you’ll come across and their common use cases:

The perceptron is the oldest neural network, created by Frank Rosenblatt in 1958. It has a single neuron and is the simplest form of a neural network.

Feedforward neural networks, or multi-layer perceptrons (MLPs), are what we’ve primarily been focusing on within this article. They are comprised of an input layer, a hidden layer or layers, and an output layer. While these neural networks are also commonly referred to as MLPs, it’s important to note that they are actually comprised of sigmoid neurons, not perceptrons, as most real-world problems are nonlinear. Data usually is fed into these models to train them, and they are the foundation for computer vision, natural language processing, and other neural networks.
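As a rough sketch, a forward pass through such a network is just the sigmoid-neuron computation applied layer by layer. The 2-3-1 shape and every weight below are arbitrary illustrative values:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def layer(inputs, weights, biases):
    # Each row of `weights` belongs to one neuron in this layer.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Illustrative 2-3-1 network: 2 inputs -> 3 hidden neurons -> 1 output.
hidden = layer([0.5, -1.0],
               [[0.2, 0.8], [-0.5, 0.3], [0.9, -0.1]],
               [0.1, 0.0, -0.2])
output = layer(hidden, [[0.4, -0.7, 0.6]], [0.05])
print(output)  # a single value between 0 and 1
```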

Convolutional neural networks (CNNs) are similar to feedforward networks, but they’re usually utilized for image recognition, pattern recognition, and/or computer vision. These networks harness principles from linear algebra, particularly matrix multiplication, to identify patterns within an image.
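To illustrate that matrix arithmetic, here is a single convolution pass over a toy image with a hand-picked kernel; in a real CNN the kernel values are learned during training rather than chosen by hand:

```python
# One 2D convolution: slide a small matrix of weights (the kernel) over
# the image and sum the element-wise products. Toy values for illustration.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [1, 1, 0, 0],
    [1, 1, 0, 0],
]
kernel = [[1, -1], [1, -1]]  # responds strongly to vertical edges

def convolve(img, k):
    kh, kw = len(k), len(k[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(k[a][b] * img[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

for row in convolve(image, kernel):
    print(row)  # large magnitudes mark where the pattern appears
```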

Recurrent neural networks (RNNs) are identified by their feedback loops. These learning algorithms are primarily leveraged when using time-series data to make predictions about future outcomes, such as stock market predictions or sales forecasting.
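A minimal sketch of that feedback loop, with made-up weights and a toy input sequence; the hidden state produced at one time step is fed back in at the next, which is how the network carries past context forward:

```python
import math

# The feedback loop of a vanilla RNN, reduced to scalars for clarity.
w_x, w_h, b = 0.5, 0.8, 0.0  # illustrative weights
hidden = 0.0                 # state carried across time steps

for x_t in [1.0, 0.5, -0.3, 0.7]:  # a tiny "time series"
    hidden = math.tanh(w_x * x_t + w_h * hidden + b)
    print(round(hidden, 3))  # each step retains a trace of earlier inputs
```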

History of neural networks

The history of neural networks is longer than most people think. While the idea of “a machine that thinks” can be traced to the Ancient Greeks, we’ll focus on the key events that led to the evolution of thinking around neural networks, which has ebbed and flowed in popularity over the years:

1943: Warren S. McCulloch and Walter Pitts published “A Logical Calculus of the Ideas Immanent in Nervous Activity.” This research sought to understand how the human brain could produce complex patterns through connected brain cells, or neurons. One of the main ideas that came out of this work was the comparison of neurons with a binary threshold to Boolean logic (i.e., 0/1 or true/false statements).

1958: Frank Rosenblatt is credited with the development of the perceptron, documented in his research, “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain”. He took McCulloch and Pitts’s work a step further by introducing weights to the equation. Leveraging an IBM 704, Rosenblatt was able to get a computer to learn how to distinguish cards marked on the left from cards marked on the right.

1974: While numerous researchers contributed to the idea of backpropagation, Paul Werbos was the first person in the US to note its application within neural networks within his PhD thesis.

1989: Yann LeCun published a paper illustrating how the use of constraints in backpropagation and its integration into the neural network architecture can be used to train algorithms. This research successfully leveraged a neural network to recognize hand-written zip code digits provided by the U.S. Postal Service.

Advantages of Neural Networks

There are various advantages of neural networks, some of which are discussed below:

1) Store information on the entire network

Unlike in traditional programming, where information is stored in a database, information in a neural network is stored across the entire network. If a few pieces of information disappear from one place, the whole network keeps functioning.

2) The ability to work with insufficient knowledge:

After training, an ANN can produce output even when the input data is incomplete or insufficient. How important the missing information is determines how much performance suffers.

3) Good fault tolerance:

The output generation is not affected by the corruption of one or more than one cells of an artificial neural network. This makes the networks better at tolerating faults.

4) Distributed memory:

For an artificial neural network to learn, examples must be selected and shown to the network, and it must be taught to produce the desired output for them. The network’s progress is directly proportional to the instances that are selected.

5) Gradual Corruption:

A network experiences gradual degradation and slows down over time, rather than corroding immediately.

6) Ability to train machine:

ANNs learn from events and make decisions based on similar events.

7) The ability of parallel processing:

These networks have numerical strength, which makes them capable of performing more than one task at the same time.

How does Facebook use deep learning models to engage users?

At Facebook it’s all about user engagement, and to accomplish this, the company relies heavily on deep learning algorithms to tailor its products to the interests of individuals.

Facebook achieved web dominance by riding a business model of understanding users and feeding them tailored content and advertising. And as the social networking company further builds on its strong position, it leans heavily on deep learning models.

“These kinds of deep learning techniques have been really important over the last couple of years,” — Andrew Tulloch, an artificial intelligence researcher at Facebook.

Understanding text with deep learning

It’s not all about images and videos, though. Facebook also uses natural language processing algorithms to interpret textual content and improve the quality of posts shown to users.

Tulloch said Facebook uses an NLP system built around neural networks to identify posts that are excessively promotional, spam or clickbait. The deep learning model filters these types of posts out and keeps them from showing in users’ news feeds.

“There’s a huge amount of textual content that’s being uploaded on Facebook every day, and understanding that is important to improving customer experience,” Tulloch said.

Outside of the news feed, deep learning models are helping Facebook develop products by enabling developers to understand content at a large scale.

Deep learning for computer vision

For example, deep learning computer vision models are used to interpret the content of photos users have posted and decide which to surface in the “On This Day” feature. This Facebook feature shows users posts that they made on the same day in past years, but Tulloch said it’s important that it not resurface potentially negative memories.

So the models underlying the feature have to interpret images and develop a semantic understanding of what’s happening to ensure it’s something people would want to be reminded of. It does this in part by identifying people and objects in images and interpreting the context around them.

The models were trained on more than a billion photos uploaded to Facebook over the years, and they have to score, in real time, the millions of new images uploaded each day. Tulloch said this is a huge technical challenge, but one for which the convolutional neural networks his team uses are well-suited.

“The scale of this problem is massive,” he said. “But these kinds of computer vision systems are really powerful in understanding what’s going on.”

It all comes back to keeping users engaged on the social network. Tulloch said deep learning has played an important role in Facebook’s ability to do so, filling a crucial need in the company’s business model. “A lot of the challenge comes from surfacing the right content at the right time,” he said.

Thank You!

Keep Learning & Sharing…

If this article is useful for you then don’t forget to press the clap 👏 icon and also follow me for more such amazing articles.

Leave a comment if you have any doubts, or you can connect with me on LinkedIn.
