5 Deep Learning Trends that will Rule 2019

Deep learning, powered by deep neural networks, can deliver significant benefits to organizations on their transformation journey. Trends related to transfer learning, vocal user interface, ONNX architecture, machine comprehension and edge intelligence will make deep learning more attractive to businesses in the near future. There is no doubt that we will continue to see a growth in the application of deep learning methods in 2019 and beyond.

The terms ‘deep learning’ and ‘deep neural networks’ have been around for quite a while now.

Deep neural networks can deliver significant benefits to businesses; in fact, many businesses are taking advantage of deep learning for more effective pattern recognition, recommendation engines, translation services, fraud detection and more.

While there’s no denying the advantages that deep neural networks bring to the table, their adoption across organizations and industries has been relatively narrow.

Even today, traditional machine learning methods and applications continue to lead the way.

And this is understandable. It is difficult for any industry or organization to swap out, adopt and implement a new technology – it cannot be done as quickly as publishing a research paper.

However, the use of deep learning and deep neural networks has become increasingly prevalent over the last few years.

Let’s take a look at some trends that really took off in 2018.

1) Transfer learning

Transfer learning is a widely popular machine learning technique wherein a model, trained and developed for a particular task, is reused to perform another, similar task.

For example, if you have trained a simple classifier to detect whether an image contains a car, you could reuse the knowledge the model gained during training to recognize other objects, like trucks.

This technique gained popularity because it makes deep learning quick to apply: pre-trained, open-source models are used as the starting point for computer vision and natural language processing tasks.

Training and developing such models from scratch requires access to a large volume of data and considerable computational power, which is not easy for many organizations.

For organizations that are taking the first steps towards digital transformation, this can often be a significant challenge. The ability of transfer learning to address these issues has contributed to its increased adoption.

You would typically use transfer learning when (a minimal code sketch follows this list):

  • You don’t have enough labeled training data to train your network from scratch.
  • A network pre-trained on a similar task, using a massive amount of data, already exists.
  • The original task and the new task take the same kind of input.
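
For illustration, here is a minimal Keras sketch of the pattern (an assumption on our part – the article does not prescribe a framework): a network pre-trained on ImageNet is frozen, and only a small, task-specific head is trained.

```python
# Minimal transfer-learning sketch in Keras (assumption: TensorFlow 2.x is
# installed; the article does not name a specific framework).
import tensorflow as tf

# Start from a network pre-trained on ImageNet, dropping its classifier head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained knowledge

# Add a small task-specific head, e.g. for a two-class car/truck problem.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_images, train_labels, epochs=5)  # trains only the new head
```

Because the frozen base already encodes generic visual features, the new head can reach useful accuracy with far less labeled data and compute.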

The use of transfer learning has led to many innovative and quick solutions, allowing organizations to successfully adopt Artificial Intelligence in their digital transformation journey.

2) VUI

A VUI (Voice User Interface or Vocal User Interface) is the interface for any speech application. Technically, a VUI can be either a primary or a supplementary interface that enables natural voice communication between humans and machines.

As a use case, a VUI can be something as simple as a light that activates on a voice command, or an entire voice-enabled automobile infotainment console.

The important thing to understand is that a VUI does not always require a visual interface – it can stand alone as a fully auditory or tactile (for example, vibratory) interface.

VUI has suddenly gained a whole new momentum due to the rise of voice assistant services like Alexa, Google Assistant and Siri. These voice services have completely redefined user interfaces, enabling devices and appliances to communicate naturally with users and bridging the gap between humans and machines.

To make this possible, extensive deep learning algorithms – for speech recognition, language modeling and translation – work in the background.
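
As a toy illustration, here is a voice-command sketch built on the third-party SpeechRecognition package (our choice of example – the article does not name a toolkit), which delegates the deep-learning-backed speech-to-text step to a cloud service:

```python
# Toy VUI sketch using the third-party SpeechRecognition package
# (assumption: `pip install SpeechRecognition pyaudio`; the article does
# not prescribe this toolkit).
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:      # capture audio from the default mic
    print("Say something, e.g. 'lights on'...")
    audio = recognizer.listen(source)

try:
    command = recognizer.recognize_google(audio)  # cloud speech-to-text
    if "lights on" in command.lower():            # hypothetical voice command
        print("Turning the lights on")
except sr.UnknownValueError:
    print("Sorry, I did not catch that")
```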

Every day, new functions are added to voice assistants. Such functions are popularly known as “skills”. Alexa has more than 50,000 skills to date, and the number is set to increase over time. These skills have also added to the popularity and adoption of VUI.

VUI was initially adopted primarily in smartphones, but it has now entered home automation systems, smart speakers, electronic appliances and other devices. While VUI-based applications are still somewhat narrow in the current industrial scenario, there is great potential for more disruption in 2019 and beyond.

3) ONNX architecture

According to GitHub, “Open Neural Network Exchange (ONNX) is an open format to represent deep learning models. With ONNX, AI developers can more easily move models between state-of-the-art tools and choose the combination that is best for them.”

Proposed by Microsoft and Facebook in September 2017, ONNX models are now seeing increased adoption in commercial and open-source neural network libraries.

Assume you have trained and developed a deep learning model using the TensorFlow library, and the model’s constructs are such that it will execute only with TensorFlow. What if your execution environment is not TensorFlow-based and is backed by a different library?


ONNX resolves this issue. It allows interoperability between models from different libraries and removes the technical dependency on any single library, giving organizations a free hand to test and experiment with different models without changing their execution environment.
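
As a rough sketch of this interoperability (our assumptions: `pip install torch onnxruntime numpy`; PyTorch is just an example source framework, not one the article prescribes), a model can be exported once to ONNX and then executed by a completely different runtime:

```python
# Sketch of ONNX interoperability: export from one library, run in another.
# Assumptions: `pip install torch onnxruntime numpy`.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# A toy model built and trained in one library (PyTorch, as an example)...
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# ...exported once to the framework-neutral ONNX format.
dummy_input = torch.randn(1, 4)  # example input that fixes the graph shape
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Any ONNX-compatible runtime can now execute the same file.
session = ort.InferenceSession("model.onnx",
                               providers=["CPUExecutionProvider"])
result = session.run(None, {"input": np.random.randn(1, 4).astype(np.float32)})
print(result[0])
```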

Almost all major libraries now support ONNX models and this will be a game-changer in the times to come.

4) Machine comprehension

Machine Comprehension (also called Machine Reading Comprehension or Machine Reading) refers to AI models that give a computer the ability to read a document and answer questions about it. While this is a relatively elementary task for a human, it is not that straightforward for AI models.

The Stanford Question Answering Dataset (SQuAD) is “a reading comprehension dataset, consisting of questions posed by crowd workers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.”

Almost every major organization working in the AI field – including pioneers like Google, AWS, Microsoft and Facebook, as well as research institutes and universities – has entered this competition to develop a machine comprehension system that beats human accuracy on the SQuAD dataset.

A breakthrough was achieved on 5 January 2018, when an AI model from Alibaba outperformed humans in reading comprehension. This SLQA+ (ensemble) model recorded an exact-match score on par with the human benchmark of ~82% on the SQuAD dataset.

Google’s BERT (ensemble) model, submitted on 5 October 2018, currently tops the leaderboard on SQuAD 1.1 and surpasses human performance. The competition has now reached a new level with SQuAD 2.0.
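
As a rough illustration of what such a system does, here is a minimal extractive-QA sketch using the Hugging Face transformers library (our choice of example – these are not the leaderboard submissions themselves; assumes `pip install transformers` plus a backend such as PyTorch):

```python
# Minimal extractive question-answering sketch with the Hugging Face
# `transformers` library (an assumption; not the leaderboard models).
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default SQuAD-tuned model

context = ("The Stanford Question Answering Dataset (SQuAD) consists of "
           "questions posed by crowd workers on a set of Wikipedia articles.")
result = qa(question="Who posed the questions in SQuAD?", context=context)

print(result["answer"])  # the answer is a text span copied from the context
```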

This was truly an achievement in 2018; in 2019, we will continue to see more practical applications of machine comprehension as an extension to virtual assistant applications.

5) Edge intelligence

Edge Intelligence (EI) changes the way data is acquired, stored and extracted – it shifts the process from storage devices in the cloud to the edge (e.g. a camera or a heat sensor).

EI proposes to make edge devices somewhat independent by moving decision-making closer to the data source, which reduces communication delays and enables near real-time results.

Since the advent of IoT, there has been an enormous increase in the number of connected devices. This number is bound to increase exponentially in the years ahead.

Due to the current challenges faced in connecting a very large number of devices to the cloud, organizations are now more inclined towards solutions that utilize edge intelligence.

When we talk about edge intelligence, we draw parallels with edge computing. However, edge intelligence promises a lot more. Edge computing refers to computation at the edge, mostly on gateways with minimal processing, whereas edge intelligence goes one step further, with edge devices that can truly work independently.
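
As a rough sketch of the idea (with TensorFlow Lite as our example edge runtime – the article does not prescribe one, and `model.tflite` is a hypothetical converted model already on the device), a sensor reading can be classified locally instead of being shipped to the cloud:

```python
# On-device inference sketch with TensorFlow Lite (an example edge runtime).
# Assumption: a converted model file `model.tflite` is present on the device.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# e.g. a camera frame or sensor reading, processed locally
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()

prediction = interpreter.get_tensor(out["index"])
print(prediction)  # act on the result at the edge, e.g. raise an alert
```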

Amazon took a notable step in this direction by announcing the launch of AWS DeepLens.

TechCrunch describes DeepLens as a “small Ubuntu- and Intel Atom-based computer with a built-in camera that is powerful enough to easily run and evaluate visual machine learning models.”

This was the first widely recognized edge-intelligent device to be launched commercially, followed by the recent Vision AI Developer Kit from Microsoft.

There is no doubt that we will see more and more edge-intelligent devices emerging for quick adoption across industries. To name a few, we already have devices like the Intel Neural Compute Stick and OpenMV cameras.

As we head into 2019, we are confident that the above trends will continue to gather momentum, fueling the spread and application of deep learning across industries. At eInfochips, we provide Machine Learning services that help organizations build innovative products and devise highly customized solutions. If you need assistance with your AI/ML projects, please get in touch with us.
