Meet Aritra: An Autonomous Mobile Robot (AMR) with Accurate Indoor Positioning and Navigation

The article outlines the evolution of robotics, emphasizing their diverse roles in simplifying and securing human life. It introduces ARITRA, an Autonomous Mobile Robot (AMR) developed by eInfochips and explores AMR technology. The article delves into Aritra's features, applications, distinct capabilities, technology stack, hardware, and potential future enhancements. Additionally, it reflects on the expanding role of mobile robots in industry and promotes eInfochips’ Robotics Centre of Excellence.

Robotics has advanced significantly over the last few decades; it is no longer limited to massive, immobile robots that must be kept far from humans and are managed by specialized programs. Robots have been instrumental in advancing technology, taking people out of hazardous occupations, and automating complex procedures – the goal has always been to make human life simpler and safer.

In this article, I will introduce you to ARITRA, an autonomous mobile robot, or AMR, developed by eInfochips. But before we go any further, let us first understand what an Autonomous Mobile Robot (AMR) is.

An Autonomous Mobile Robot (AMR) is a general name for any robot that can navigate its surroundings without direct human supervision or a predefined course. AMRs can navigate around both stationary obstacles (such as buildings, racks, and workstations) and moving obstacles (such as people, lift trucks, and debris) because their advanced sensors allow them to perceive and interpret their environment. This lets them carry out their tasks effectively along the most direct path possible. With a CAGR of 15.60% during the forecast period (2023-2028), the autonomous mobile robot market is anticipated to grow from USD 3.36 billion in 2023 to USD 6.94 billion by 2028.

Although there are numerous parallels between AMRs and Automated Guided Vehicles (AGVs), there are also some significant distinctions. Flexibility is the most important of these: unlike AGVs, AMRs are not compelled to follow strict, predefined routes. AMRs are designed to work alongside humans in picking and sorting tasks, in contrast to AGVs, and they select the best way to finish each task.

Let us now deep dive and talk about Aritra, an Autonomous Mobile Robot built by the eInfochips robotics team. This section will cover the what, why, how, and other interesting discoveries of our own Autonomous Mobile Robot, Aritra.

1. What is Aritra?

There are several kinds of robots, one of them being AMR. Aritra is one such AMR (Autonomous Mobile Robot) that can map its environment, localize itself in that environment and navigate autonomously.

2. What does Aritra do?

Autonomous navigation, in robotics terms, is the task of moving from one location to another without any human intervention; positioning is estimating the locations of particular objects, as well as the robot's own location relative to those objects.

So Aritra can be given a target location and can deliver, pick, or place something there. For example, if there is a huge warehouse and something needs to be retrieved from, say, 2000 m away, Aritra can help you with that.
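Under the hood, moving from point A to point B typically reduces to planning a path over an occupancy grid of the mapped environment. As a minimal, hypothetical sketch (not Aritra's actual planner), here is A* search over a toy warehouse grid in Python:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid: 0 = free cell, 1 = obstacle.
    Returns the list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]   # (f, g, cell, path)
    best_g = {start: 0}
    while open_set:
        _, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # goal unreachable

# A made-up floor plan: row 1 is a rack with a single gap at column 2
warehouse = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
route = astar(warehouse, (0, 0), (2, 0))
```

A real AMR stack plans on a much finer costmap and then smooths the path, but the core idea of searching free cells toward a goal is the same.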

3. What are the applications of Aritra?

AMR is a broad term. Different industries need different types of AMR, and application-specific, industry-specific AMRs can be built to specification. In its current form, Aritra can move, pick and place objects, and carry a 50 kg load. It can easily be modified to suit different needs and applications.

4. How is Aritra different?

At this stage, Aritra is flexible and designed in such a way that it can be used with different processors and embedded platforms, such as those from NVIDIA, NXP, Qualcomm, and Ambarella.

There was no specific application-oriented requirement when Aritra was designed. Instead, we made Aritra flexible enough that if, say, an embedded developer wants to assess the capabilities of their processor on the robot, it can be done.

It is also flexible in terms of sensors: we can attach LiDAR, RGB cameras, ToF sensors, and depth cameras. So Aritra can be deployed in multiple operational scenarios, possibly even in environments where it would be hazardous for humans to operate.

Since the entire electronics and PCB design of Aritra was developed by eInfochips, we know the technology intimately and can easily create or modify the design based on new specifications or requirements.

5. What are its capabilities?

In its most basic form, Aritra can navigate autonomously from point A to point B while performing object identification and avoiding obstacles. It also performs VSLAM (Visual Simultaneous Localization and Mapping) and can carry a 50 kg load.

6. What is the technology stack used?

The major software stack used in this robot is ROS2 (Robot Operating System 2). It is an open-source robotics framework that runs on high-level processors such as NVIDIA platforms. It can communicate with low-level components such as motors, motor drivers, and microcontrollers, and can also communicate with high-level sensors such as cameras and LiDAR.

ROS2 acts as a middle layer, sitting between the low-level Linux layers and the high-level application layers. It is a collection of a huge number of open-source algorithms designed specifically for robotics tasks.
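Conceptually, ROS2 applications are built from nodes that exchange messages over named topics using publish/subscribe. The toy Python sketch below illustrates only the idea; real ROS2 (via rclpy) handles node discovery and transport over DDS:

```python
from collections import defaultdict

class TopicBus:
    """A toy illustration of ROS2-style publish/subscribe over named topics.
    Conceptual only: real ROS2 nodes discover each other via DDS middleware."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register a callback to receive every message on this topic
        self._subscribers[topic].append(callback)

    def publish(self, topic, msg):
        # Deliver the message to every subscriber of the topic
        for callback in self._subscribers[topic]:
            callback(msg)

bus = TopicBus()
received = []
bus.subscribe("/scan", received.append)          # e.g. a navigation node listening
bus.publish("/scan", {"ranges": [1.2, 0.8]})     # e.g. a LiDAR driver node publishing
```

The topic name "/scan" and the message shape here are illustrative; in ROS2 they would be typed messages such as sensor_msgs/LaserScan.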

Aritra uses various software modules, sensors, libraries, and AI/ML frameworks and models to implement applications such as map generation, navigation, autonomous charging, and parking. It can take that information and compute business logic based on the use case.

Algorithms: VSLAM, stereo depth estimation, sensor fusion, and RTAB-Map-based navigation.
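To illustrate one of these algorithms: stereo depth estimation recovers distance from the disparity between rectified left and right images via Z = f · B / d. A small sketch with made-up camera parameters (not Aritra's calibration values):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d,
    where f is focal length (pixels), B the camera baseline (meters),
    and d the horizontal disparity (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point visible in both views)")
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: f = 600 px, B = 0.05 m (typical of compact depth
# cameras), and a feature matched with 15 px of disparity
z = stereo_depth(600.0, 0.05, 15.0)
```

Note the inverse relationship: nearby objects produce large disparities and far objects small ones, which is why stereo depth accuracy degrades with range.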

Edge computing capabilities: Aritra supports edge computing, executing multiple use cases and AI/ML models on the CPU, GPU, or DSP depending on the requirements.

ML deployment: An end-to-end AI/ML pipeline, from data generation to model creation, training, segmentation, inferencing, and deployment, has been built for the object identification/detection use case.

Sensor integration: For the AMR to perceive its surroundings accurately and navigate, various sensors have been used, including a depth camera, 2D/3D LiDAR, an IMU, and Time-of-Flight (ToF) sensors.

Robot simulation framework: We used NVIDIA Omniverse Isaac Sim to simulate the deployment of the robot. The tool can test numerous robotics applications in a simulated environment and provides the ability to evaluate map-generation algorithms and AI/ML algorithms.

Software libraries: ROS2, ROS1, CUDA, Docker, OpenCL, and low-level drivers.

7. What hardware and SOM are used?

The drive motors are BLDC motors, driven by dedicated motor drivers. An ESP32 microcontroller handles communication between the high-level processor and the low-level motors.
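For context on what that low-level link carries: wheel encoder readings from such a drivetrain are typically turned into a pose estimate using differential-drive dead reckoning. A simplified sketch (the wheel base and travel distances below are made up, not Aritra's actual parameters):

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """Differential-drive dead reckoning: advance the pose (x, y, theta)
    given the distance each wheel travelled (derived from encoder ticks).
    Uses the midpoint-heading approximation for the arc."""
    d_center = (d_left + d_right) / 2.0          # distance of robot center
    d_theta = (d_right - d_left) / wheel_base    # change in heading (rad)
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, (theta + d_theta) % (2 * math.pi)

# Hypothetical run: drive straight 1 m, then a gentle left arc
pose = update_pose(0.0, 0.0, 0.0, 1.0, 1.0, 0.4)   # both wheels 1 m, wheel base 0.4 m
pose = update_pose(*pose, 0.10, 0.14, 0.4)          # right wheel travels farther: turn left
```

Dead reckoning like this drifts as wheel slip accumulates, which is exactly why the stack fuses it with IMU and visual odometry, as described below.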

We used NVIDIA SOMs, and we have tested the stack on Qualcomm platforms as well.

We have used a 2D LiDAR, an Intel RealSense depth camera, an Analog Devices Time-of-Flight (ToF) camera, and a Logitech RGB camera for specific tasks such as object detection. So overall, we have integrated four different types of sensors, two different types of edge processors, motor drivers, and low-level microcontrollers.

The NVIDIA AGX Orin was used to generate the navigation map using RTAB-Map, which runs on high-level ROS2 libraries. Wheel odometry and IMU data are fused with visual odometry, i.e., a sensor fusion of IMU, wheel odometry, and visual odometry.
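As a simplified illustration of such fusion: independent estimates of the same quantity (say, heading from wheel, visual, and IMU odometry) can be combined by inverse-variance weighting, so the least noisy source dominates. A real stack would typically use an EKF, but the underlying idea is the same (all numbers below are made up):

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of independent 1-D estimates.
    estimates: list of (value, variance) pairs. Returns the fused value
    and its variance; the fused variance is smaller than any input's."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total

# Hypothetical heading estimates (radians) with per-sensor noise variances:
# wheel odometry (slippery floor), IMU yaw (best here), visual odometry
fused, fused_var = fuse([(0.50, 0.04), (0.54, 0.01), (0.48, 0.02)])
```

The fused estimate lands closest to the IMU reading because it has the smallest variance, and the combined uncertainty is lower than any single sensor's.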

8. What are the future capabilities that can be added for Aritra?

The future capabilities that we want to add include:

We are thinking about adding a manipulator, i.e., a robotic arm. The combination of an AMR and an arm working alongside humans is known as a co-bot.

Fleet management, where we can synchronize dozens of robots.

Cloud connectivity: as of now, the robot can operate only within a particular local network. Once we add cloud capabilities, it can be operated remotely from anywhere.

Final Thoughts

Mobile robots, which enable us to mix machines and human activity in one location without concern about severe accidents or safety violations, are unquestionably the future of the industry. They also have the independence to roam about their environment and easily perform monotonous duties. They have evolved from science fiction fantasies into reality, improving in dependability, effectiveness, and affordability. But keep in mind that every robot is different, equipped with distinct algorithms and solutions that define its capabilities. Imagine a time, not too far from now, when discussing mobile robots is as common as discussing smartphones is today.

eInfochips has created a Robotics Centre of Excellence (CoE) and a dedicated team of subject matter specialists in the areas of hardware design, sensor integration, AI/ML, and camera development to facilitate the implementation of AMR. Our team has created several proof-of-concepts by utilizing the robotics-specific hardware and software toolchains of prominent platform vendors like NVIDIA, Qualcomm, and ADI, with whom we are strategic partners.
