Research and development spending on robotics and automation technology has grown rapidly in recent years. As the underlying technologies have advanced, robots have been adopted across many sectors, including industrial, manufacturing, and consumer markets. The adoption of highly efficient robots will also help address the skilled labor shortage in the near future.
Artificial Intelligence (AI) has enabled applications and capabilities that were previously considered unattainable and is impacting almost every industry. It has made autonomous machines a reality and promises much higher efficiency, speed, and sustainability. Automation has also improved safety standards for the workforce by providing automated monitoring and alerts for potentially hazardous situations.
While autonomous machines offer many benefits and applications, there are significant technology constraints and limitations to bringing them to market. Many technology companies and forums have made notable progress in addressing these challenges. In this blog, we touch upon key considerations and technologies related to autonomous machines.
Platforms and Processors
Several platform companies have launched processors targeted at autonomous machines. These are multi-core, low-power, small-form-factor processors with dedicated AI engines for high-performance computing. Such a platform also needs to support multiple cameras and sensors, along with pre-defined software SDKs for faster deployment of various functions.
The NVIDIA Jetson family is one of the platform options for autonomous machines. This small-form-factor, high-performance processor family offers the pre-built NVIDIA JetPack™ SDK, pre-built AI models, a sensor ecosystem, and camera partners to speed up development.
The Qualcomm Robotics RB5 Platform is another platform that supports the development of smart, power-efficient, and cost-effective robots by combining high-performance heterogeneous computing, on-device AI/machine learning, and advanced computer vision with robust security, multimedia, Wi-Fi, and 4G/5G cellular connectivity. The platform also supports a range of sensors that provide real-time, highly accurate data on a single board, enabling developers to design smaller, more robust robots. It is based on the octa-core Qualcomm QRB5165 processor, which offers powerful heterogeneous computing and includes a dedicated Qualcomm® Artificial Intelligence (AI) Engine. Qualcomm's AI Engine promises to deliver 15 Trillion Operations Per Second (TOPS), making it a strong choice for AI on edge devices. TOPS is a simplified metric that indicates the number of computing operations an AI chip can execute per second. The processor also offers a powerful image signal processor (ISP) that can support up to seven concurrent cameras, a dedicated computer vision engine for advanced video analytics, and the Qualcomm® Hexagon™ Tensor Accelerator (HTA).
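To get a feel for what a TOPS figure implies in practice, a back-of-envelope throughput estimate can be sketched in a few lines. The model size and utilization figures below are illustrative assumptions, not Qualcomm specifications:

```python
# Back-of-envelope estimate: how many inferences per second a 15 TOPS
# accelerator could sustain for a model of a given size.
# The model size and utilization values are illustrative assumptions.

def max_inferences_per_sec(tops, ops_per_inference, utilization=0.3):
    """tops: peak trillions of ops/sec; ops_per_inference: ops for one
    forward pass; utilization: fraction of peak actually achieved."""
    return tops * 1e12 * utilization / ops_per_inference

# Example: a hypothetical vision model needing 5 GOPs per frame.
fps = max_inferences_per_sec(15, 5e9)
print(round(fps))  # -> 900, a theoretical upper bound; real throughput is lower
```

Real throughput depends heavily on memory bandwidth, quantization, and model structure, which is why utilization is far below 100% in practice.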
Software Frameworks
To stop robotics teams from reinventing the wheel, for example writing complex robotics algorithms from scratch, Keenan Wyrobek and Eric Berger from Stanford created the Robot Operating System (ROS), an open-source set of software libraries, tools, and frameworks for robot applications. ROS offers specialized algorithms, developer tools, and proofs of concept that can help kick-start software development for autonomous machines. Over time, different versions of ROS have been released to address the shortcomings of previous generations and to better serve real-life scenarios and industry needs.
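ROS structures a robot application as nodes that exchange messages over named topics. The minimal pure-Python sketch below mimics that publish/subscribe pattern; it is not the actual ROS client library (rospy), just an illustration of the model:

```python
# Minimal publish/subscribe sketch illustrating the messaging model that
# ROS builds on. Plain Python, not the real ROS API (rospy/rclpy).

class Topic:
    """A named channel; every subscriber receives each published message."""
    def __init__(self, name):
        self.name = name
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        for callback in self.subscribers:
            callback(message)

# A "sensor node" publishes range readings; a "control node" reacts to them.
scan = Topic("/scan")
log = []
scan.subscribe(lambda dist: log.append("STOP" if dist < 0.5 else "GO"))

for reading in [2.0, 1.1, 0.3]:  # simulated distance readings in metres
    scan.publish(reading)

print(log)  # -> ['GO', 'GO', 'STOP']
```

In real ROS, topics carry typed messages across processes (and machines), and the framework handles discovery, serialization, and transport for you.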
Alongside ROS, OpenCV (Open Source Computer Vision Library) acts as a toolkit for computer vision. It contains built-in classes and methods for image and video processing and analysis. Most of its algorithms are written in C/C++, with bindings for languages such as Python, Java, and MATLAB, and it can be ported to different operating systems such as Android and Linux.
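As a concrete example of the kind of operation OpenCV provides, its color-to-grayscale conversion (cv2.cvtColor with COLOR_BGR2GRAY) is a weighted sum of the color channels using the standard ITU-R BT.601 luma weights. The pure-Python sketch below applies those weights to single pixels; OpenCV does the same per pixel in optimized C++:

```python
# Grayscale conversion as performed by cv2.cvtColor(img, cv2.COLOR_BGR2GRAY):
# a weighted sum of the colour channels (ITU-R BT.601 luma weights).
# Pure-Python sketch for one pixel; OpenCV applies this per pixel in C++.

def bgr_to_gray(b, g, r):
    return 0.114 * b + 0.587 * g + 0.299 * r

print(round(bgr_to_gray(0, 0, 255)))      # pure red   -> 76
print(round(bgr_to_gray(255, 255, 255)))  # pure white -> 255
```

The weights reflect human perception: green contributes most to perceived brightness, blue least.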
TensorFlow is commonly used for machine learning, specifically the family of deep learning algorithms. Training these models can take a long time, which is where GPUs come in: they provide far better throughput for this workload than CPUs.
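What frameworks like TensorFlow automate (gradient computation, batching, dispatch to GPUs) can be seen in miniature in a hand-written training loop. The pure-Python sketch below fits a single weight by gradient descent; TensorFlow runs the same idea for millions of parameters on accelerators:

```python
# Hand-rolled gradient descent fitting y = w * x to data generated with w = 2.
# TensorFlow automates this loop at scale: it derives gradients automatically
# and executes the arithmetic on GPUs/accelerators.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs, true w = 2
w = 0.0
lr = 0.05  # learning rate

for _ in range(200):
    # gradient of mean squared error (1/N) * sum((w*x - y)^2) w.r.t. w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # converges toward 2.0
```

The expensive part at real scale is the gradient arithmetic over huge tensors, which is exactly the massively parallel workload GPUs excel at.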
Sensors
Robots require exhaustive information about their surroundings to function effectively. Sensors play an important role in estimating a robot's condition and environment: they measure various parameters that are then passed to a controller to enable appropriate behavior. Based on their function, sensors can be classified into two categories: internal sensors and external sensors.
Internal sensors capture the robot's own vital statistics, such as position, speed, and joint angles. External sensors gather information about the robot's surroundings; examples include cameras, IR sensors, and temperature sensors. Analog Devices, Sony, OmniVision, STMicroelectronics, and InvenSense are some of the leaders in the sensor market.
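In practice, readings from different sensors are often fused: a gyroscope (internal) drifts over time, while an accelerometer-derived tilt angle is noisy but drift-free. A complementary filter is a common, simple way to blend the two. The sketch below is illustrative, with made-up readings:

```python
# Complementary filter: a common technique for fusing a gyroscope's angular
# rate (accurate short-term, drifts long-term) with an accelerometer's tilt
# estimate (noisy, but drift-free) into one stable angle.
# All sensor readings below are made-up illustrative values.

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend the gyro-integrated angle with the accelerometer angle."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

angle = 0.0
dt = 0.01  # 100 Hz sample period, in seconds
samples = [(10.0, 0.2), (10.0, 0.3), (10.0, 0.4)]  # (gyro deg/s, accel deg)

for gyro_rate, accel_angle in samples:
    angle = complementary_filter(angle, gyro_rate, accel_angle, dt)

print(round(angle, 3))
```

The blend factor alpha trades responsiveness (trusting the gyro) against drift rejection (trusting the accelerometer); 0.98 is a typical starting point, tuned per application.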
eInfochips has developed its own Robotics Center of Excellence (CoE), comprising a strong team of experts and supporting infrastructure. The team has developed robotics proofs of concept for various applications leveraging the latest platforms and technologies. We have enabled multiple solutions across domains by offering end-to-end engineering services, including hardware design, AI/ML enablement, camera development, and image tuning. eInfochips also offers modules and development kits on the latest NVIDIA, Qualcomm, and NXP platforms to kick-start development. As part of the Edge Labs initiative, we have also developed modules and kits on the Qualcomm QRB5165, QRB4210, and QRB2210 to help with early prototyping and development. For more information, refer to https://www.einfochips.com/domains/robotics-and-autonomous-machines/