The need for edge AI has grown due to challenges in centralized, cloud-based architectures and the demand to deliver immersive, pervasive experiences to consumers. Below are the key motivations for shifting AI from the cloud to the edge.
- Latency: In a centralized IoT architecture, processing sensor data in the cloud requires a huge amount of data transfer, which introduces latency and hurts the performance of both mission-critical and non-mission-critical applications. Applications such as traffic congestion updates and smart thermostats need to make real-time decisions and do not need to send all their data to the cloud for processing. With an edge AI architecture, data is processed faster and real-time decisions are made in the user's environment.
- Cost: Edge AI-enabled device infrastructure cuts large data transmission costs by processing data at the edge. This is especially important for use cases such as HD video processing and analysis, where streaming to the cloud demands huge bandwidth. It also reduces the cost of running big data centers and their hefty energy consumption.
- Consumer Data Privacy and Security: Because data is processed at the edge and consumers' private information is not transmitted to centralized infrastructure, the risk of misuse is reduced.
- AI Technology Innovations: The latest innovations in edge AI chips, with reduced form factor, low power consumption, less heat dissipation, lower cost, and higher data processing speed, have ignited the use of AI at the edge.
Growing applications of AI, with neural network (NN) and machine learning algorithms processed at the edge in the smart home, consumer, automotive, and healthcare industries, have increased the need for fast computing and high-speed processors. According to the latest MarketsandMarkets report, edge AI hardware shipments will reach 1.55 billion units by 2024, growing at a 20.64% CAGR over the forecast period. The edge AI market is split between general-purpose GPUs, focused on enterprise applications, and application-specific ICs (ASICs) built to perform edge inference. GPUs provide high compute capacity and run enterprise training workloads, while ASICs are edge-optimized chips that run key workloads.
How Edge AI will Enhance User Experience in Smart Home
Edge AI is widely used in home and consumer devices such as surveillance cameras, smart speakers, wearables, gaming consoles, AR/VR headsets, drones, and home automation robots. To enhance the human-machine interface (HMI) and customer experience, major edge AI applications are based on computer vision and natural language processing (NLP).
Smart kitchen appliances with AI, such as microwave ovens, refrigerators, and dishwashers, can serve plenty of functions and enhance the user experience. With the help of AI in a microwave oven, consumers can get cooking recipe suggestions and overcooking warnings based on the food placed in it. The oven recognizes and identifies the food and its condition with an AI camera running an object detection model such as RetinaNet or Faster R-CNN. Another example is a smart refrigerator, which can propose a lunch or dinner menu based on the food items available and assess their quality. Food image classification can be achieved by training algorithms on edge AI hardware using architectures such as VGGNet or AlexNet.
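As a rough illustration of the final classification step, the sketch below takes raw logits from a hypothetical on-device food classifier and converts them into a label and confidence. The CNN itself (an AlexNet- or VGGNet-style network) is omitted, and the label set and logit values are invented for the example.

```python
import numpy as np

# Hypothetical label set for a fridge/oven food classifier; a real deployment
# would use the classes the trained model actually knows.
FOOD_LABELS = ["pizza", "broccoli", "chicken", "rice"]

def softmax(logits):
    """Convert raw model outputs (logits) into class probabilities."""
    z = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return z / z.sum()

def classify_food(logits, labels=FOOD_LABELS):
    """Return the top label and its confidence from one inference pass.

    `logits` stands in for the output of an on-device CNN; loading and
    running the real network is omitted in this sketch.
    """
    probs = softmax(np.asarray(logits, dtype=float))
    top = int(np.argmax(probs))
    return labels[top], float(probs[top])

label, confidence = classify_food([2.0, 0.1, -1.3, 0.5])
print(label, round(confidence, 2))  # → pizza 0.71
```

On a real appliance the logits would come from an accelerator runtime (e.g. a quantized network on the edge chip); only this post-processing step changes little between platforms.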
An edge AI-enabled smart TV can detect low-resolution video streams and upscale them to high resolution, making better use of the TV display. A set-top box can enhance the user experience with video enhancements for content viewing and voice enhancements for users' voice commands. A similar approach applies to audio-video conferencing systems, where enhanced picture quality makes collaboration more effective.
Smart speakers and voice assistants can also provide a better human interface via edge-based machine learning and can be personalized to match each user's preferences.
AI on edge devices helps make quicker, real-time decisions from available data insights. Video analytics-enabled home cameras can detect important activities, such as distinguishing a person from an object and performing real-time face recognition on moving images. Sending these live video streams to the cloud for processing would consume huge bandwidth. With smart detection and intelligence at the edge, the camera streams only important activities to the cloud, not idle frames, saving bandwidth. Another very promising security camera application is elderly and baby monitoring for caregivers. An AI-enabled camera can help with fall detection, loitering detection, detecting prolonged toilet use (external monitoring), restricted-area access, and monitoring the sleeping condition of a baby or an elderly person.
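The "stream only important frames" idea can be sketched minimally as below, using simple frame differencing as a stand-in for a real on-device person detector; the frames, threshold, and decision rule are all illustrative assumptions.

```python
import numpy as np

def is_important(prev_frame, frame, threshold=10.0):
    """Decide whether a frame should be streamed to the cloud.

    A real camera would run a person/face detector on-device; here we use
    frame differencing as a cheap stand-in: if the mean absolute pixel
    change exceeds `threshold`, treat the frame as activity worth uploading.
    """
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    return float(diff.mean()) > threshold

# Simulated 8x8 grayscale frames: an idle scene, then one with movement.
idle = np.zeros((8, 8), dtype=np.uint8)
moving = idle.copy()
moving[2:6, 2:6] = 200           # a bright region appears (e.g. a person)

print(is_important(idle, idle))    # → False (idle frame: skip upload)
print(is_important(idle, moving))  # → True  (activity: stream this frame)
```

In practice this gate runs first, and frames that pass it are handed to the heavier detection model, so idle footage never leaves the device.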
With the help of edge AI and audio analytics on available data sources, a device can detect and raise alerts for sounds such as a baby crying, glass breaking, gunshots, and other malicious sounds.
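As a sketch of the cheap first stage of such an audio pipeline, the code below gates audio frames by RMS energy before a trained sound classifier (omitted here) would run; the synthetic signal, sample rate, and threshold are illustrative assumptions.

```python
import numpy as np

def detect_loud_event(samples, rate=16000, frame_ms=50, threshold=0.3):
    """Return timestamps (seconds) of frames whose RMS energy exceeds a threshold.

    A production system would feed spectrogram features into a trained sound
    classifier (baby cry vs. glass break vs. gunshot); this energy gate is
    just the inexpensive stage that decides when to run that classifier.
    """
    frame_len = int(rate * frame_ms / 1000)
    hits = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        rms = float(np.sqrt(np.mean(frame ** 2)))
        if rms > threshold:
            hits.append(i / rate)
    return hits

rate = 16000
quiet = 0.01 * np.random.default_rng(0).standard_normal(rate)  # 1 s near-silence
t = np.arange(rate // 2) / rate
crash = 0.8 * np.sin(2 * np.pi * 1000 * t)   # 0.5 s loud burst, e.g. glass breaking
audio = np.concatenate([quiet, crash, quiet])
print(detect_loud_event(audio)[:3])  # loud frames start near t = 1.0 s
```

Keeping this gate on-device means the microphone stream itself never needs to be uploaded, only the alert events.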
With advancements in edge AI hardware, wearable applications have increased significantly. Until now, the market has been dominated by smartwatches and fitness bands, used mainly for measuring and analyzing vitals. Advances in research and increased innovation are driving demand for smart wearable devices such as smart glasses, smart headsets, head-mounted displays, body-worn cameras, and medical devices.
With the emerging trend of wearable technology and innovation in new product categories, AI will fuel multiple applications for these wearables. For example, smart glasses can help the visually impaired navigate roads with voice cues and can help a machine operator visually detect abnormalities in critical parameters. A smart fitness band can analyze a user's lifestyle and suggest improvements to sitting, sleeping, and eating habits.
As the above use cases show, edge AI takes multiple data points (video, audio, numerical, and text) as inputs to the device. To improve HMI using AI in smart home and consumer devices, it is essential to integrate video, voice, and text data by running multiple intelligent processing algorithms. This approach is called multimodal AI; with the help of all these modalities, HMI can become more contextually aware and attuned to individual behavior.
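One common multimodal pattern, late fusion, can be sketched as below: each modality's model emits a confidence score for an event and a weighted average combines them into one decision. The event, scores, and weights are invented for illustration, not taken from any real product.

```python
def fuse_modalities(scores, weights):
    """Weighted late fusion of per-modality confidence scores in [0, 1]."""
    total = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total

# Hypothetical per-modality confidences from separate on-device models
# for one event, e.g. "baby is distressed".
scores = {"video": 0.9, "audio": 0.7, "text": 0.2}
# Assumed weighting: trust vision most, then audio, then text.
weights = {"video": 0.5, "audio": 0.3, "text": 0.2}

fused = fuse_modalities(scores, weights)
print(round(fused, 2), "alert" if fused > 0.5 else "ok")  # → 0.7 alert
```

Late fusion keeps each modality's model independent, which suits edge devices where the vision, audio, and text models may run on different accelerators or at different rates.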
With advancements in 5G networks and edge AI hardware, home and consumer devices will become more prominent for device manufacturers, and the market for AI-enabled devices will grow. We at eInfochips help global companies discover their connected digital roadmap through engineering consultation, development, and testing. We have helped our clients develop 30+ world-first designs that have disrupted the market. With strong partnerships with leading AI hardware platform providers such as Qualcomm and Nvidia, and deep expertise in designing compact SoCs and SoMs, we help our clients enable AI in their products. We have extensive expertise in edge AI environments including Qualcomm SNPE, AWS Greengrass, Azure IoT Edge, and Amazon Alexa. Along with that, we have rich experience with various AI libraries and frameworks for ML training algorithm development, porting, optimization, deployment, and sustenance.
Connect with our experts today to learn more about our service offerings in the edge AI landscape.
This post was last modified on November 10, 2020 7:05 am