Data Annotation in Autonomous Cars

Autonomous and semi-autonomous vehicles are loaded with systems that play a key role in enhancing the driving experience. This is made possible by the presence of multiple cameras, sensors, and various other systems, all of which produce large amounts of data. One such example is the advanced driver-assistance system (ADAS), which operates on computer vision: it uses a computer to gain a high-level understanding of the images, analyzes different situations, and alerts the driver, making decision-making more effective.

What is an annotation?

The functionalities of autonomous and semi-autonomous vehicles are made effective by annotations. Annotation is the labeling of a region of interest or object of interest in an image or video using bounding boxes and other attributes, to help ML models understand and recognize the objects detected by the sensors in the vehicle. Analytics such as facial recognition and movement detection require high-quality data that is properly annotated.
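To make this concrete, a single annotated object might be stored as a record like the one below. This is a minimal sketch; the field names are illustrative, not any particular tool's schema.

```python
# A minimal, illustrative annotation record for one object in one frame.
# Field names and values are hypothetical, not a specific tool's format.
annotation = {
    "frame_id": 1042,
    "label": "pedestrian",
    "bbox": {"x": 412, "y": 180, "width": 64, "height": 128},  # pixels
    "attributes": {
        "occluded": False,
        "moving": True,
    },
}
```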

Without properly annotated data, autonomous driving would be ineffective to the point of being almost non-existent. The quality of this data is what ensures a smooth driverless experience.

Why is annotation used?

Modern vehicles produce large amounts of data due to the presence of multiple sensors and cameras. Unless these data sets are properly labeled for further processing, they cannot be put to effective use: the labeled data is what drives the training and testing of models for autonomous vehicles. Various automation tools can help in labeling the data, as labeling it manually would be a gargantuan task.

Tools such as Amazon SageMaker Ground Truth, the MathWorks Ground Truth Labeler app, Intel's Computer Vision Annotation Tool (CVAT), Microsoft's Visual Object Tagging Tool (VoTT), the Fast Image Data Annotation Tool (FIAT), and Scalabel by Berkeley DeepDrive, among others, can help you automate your labeling process; several of these, including CVAT, VoTT, FIAT, and Scalabel, are open source.

How is annotation done?

For an autonomous vehicle to travel from point A to point B, it needs to master its surroundings completely. A typical driving function that you want to implement in a car may require two identical sensor sets: one is the sensor set under test, and the other acts as a reference.

Now let us assume that a car travels 300,000 kilometers at an average speed of 45 kilometers per hour in varying driving conditions. From these numbers, we know that the vehicle took roughly 6,700 hours to cover the distance. The car may also have multiple cameras and LIDAR (Light Detection and Ranging) systems. If we assume that they record at a bare minimum of 10 frames per second for those 6,700 hours, 240,000,000 frames of data would have been generated. Assuming that each frame contains, on average, 15 objects, including other cars, traffic lights, pedestrians, and other objects, we end up with roughly 3.6 billion objects. All these objects must be annotated.
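As a sanity check, this back-of-the-envelope arithmetic can be reproduced in a few lines of Python, using the same rounded assumptions as above:

```python
# Back-of-the-envelope estimate of the data volume described above.
distance_km = 300_000        # total distance driven
avg_speed_kmh = 45           # average speed
fps = 10                     # frames recorded per second
objects_per_frame = 15       # average objects visible per frame

hours = distance_km / avg_speed_kmh    # ~6,700 hours
frames = hours * 3600 * fps            # 240,000,000 frames
objects = frames * objects_per_frame   # ~3.6 billion objects

print(f"{hours:,.0f} h, {frames:,.0f} frames, {objects:,.0f} objects")
```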

Just annotating is not enough; it has to be accurate too. Unless this is done, no meaningful comparison can be made between the sensor sets on the vehicle. So, what if we had to manually annotate each object?

Let us try to understand how manual annotation is done. The first step is to navigate through the LIDAR scans and pull up the corresponding camera footage. Assuming that the LIDAR covers 360 degrees, a multi-camera set-up provides footage from the LIDAR's perspective. Once the LIDAR scans and the camera footage have been pulled up, the next task is to match the LIDAR perspective to the cameras. Once you know where the objects are located, the following task is object detection: putting 3D bounding boxes around each of them.
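Matching LIDAR scans to camera footage is usually done by timestamp: for each scan, pick the camera frame recorded closest in time. Here is a minimal sketch of that step, assuming sorted timestamp lists in seconds and an illustrative tolerance:

```python
import bisect

def match_lidar_to_camera(lidar_ts, camera_ts, max_gap_s=0.05):
    """For each LIDAR timestamp, return the index of the closest camera
    frame, or None if no frame lies within max_gap_s seconds."""
    matches = []
    for t in lidar_ts:
        i = bisect.bisect_left(camera_ts, t)
        # Candidates: the frames just before and just after t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(camera_ts)]
        best = min(candidates, key=lambda j: abs(camera_ts[j] - t))
        matches.append(best if abs(camera_ts[best] - t) <= max_gap_s else None)
    return matches

# Example: a 10 Hz LIDAR matched against a 30 Hz camera.
lidar = [0.0, 0.1, 0.2]
camera = [i / 30 for i in range(10)]
print(match_lidar_to_camera(lidar, camera))  # [0, 3, 6]
```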

Just putting bounding boxes with a generalized annotation such as car, pedestrian, or stop sign may not be enough. You will need proper attributes that best describe the object. In addition, you will need to capture brake lights, stop signs, moving objects, static objects, emergency vehicles, the classification of lights, which caution lights an emergency vehicle has, and so on. This needs to be an exhaustive list of objects and their corresponding attributes, where each attribute has to be addressed one at a time. That means we are talking about a lot of data.
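A fragment of such a label taxonomy might look like the following; the labels, attribute names, and values here are purely illustrative:

```python
# An illustrative fragment of a label taxonomy with per-label attributes.
# None of these names or values are a fixed standard.
LABELS = [
    {
        "name": "car",
        "attributes": {
            "brake_lights": ["on", "off", "unknown"],
            "motion": ["moving", "static", "parked"],
        },
    },
    {
        "name": "emergency_vehicle",
        "attributes": {
            "type": ["ambulance", "fire_truck", "police"],
            "caution_lights": ["flashing", "off"],
        },
    },
    {
        "name": "traffic_light",
        "attributes": {"signal": ["red", "yellow", "green", "off"]},
    },
]
```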

Once this is accomplished, you also need to ensure that the annotations are correct: another person must check the annotated data, which keeps the scope for error minimal. If this activity is done manually at an average of 60 seconds per object, we would need to spend 60 million hours, or just over 6,849 calendar years, on the 3.6 billion objects we discussed earlier. Annotating manually therefore seems implausible.
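Extending the earlier sketch confirms the estimate:

```python
# Manual effort at 60 seconds per object, for the ~3.6 billion objects.
objects = 3.6e9
seconds_per_object = 60

total_hours = objects * seconds_per_object / 3600  # 60,000,000 hours
calendar_years = total_hours / (24 * 365)          # ~6,849 years
print(f"{total_hours:,.0f} hours ~= {calendar_years:,.0f} years")
```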

How does automation help?

From the example above, we understand that it is highly impractical to annotate data manually. Various open-source tools can help with this activity. Objects can be detected automatically despite different angles, low resolution, or low-light conditions, which is made possible by deep learning models. With automation, the first step is to create an annotation task: start by naming the task and specifying the labels and the attributes associated with them. Once you have done this, you are ready to add the repository of data that needs to be annotated.
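As a rough sketch, creating such a task against a CVAT-style REST API might look like the snippet below. The endpoint, payload shape, and authentication scheme vary by tool and version, so treat all of these details as assumptions and consult your tool's documentation:

```python
import requests

# Hypothetical CVAT-style endpoint and token; adjust for your deployment.
API_URL = "http://localhost:8080/api/tasks"
HEADERS = {"Authorization": "Token <your-token>"}

task_spec = {
    "name": "highway-run-01",
    "labels": [
        {
            "name": "car",
            "attributes": [
                {
                    "name": "brake_lights",
                    "input_type": "select",
                    "values": ["on", "off", "unknown"],
                    "mutable": True,  # value may change frame to frame
                }
            ],
        },
        {"name": "pedestrian", "attributes": []},
    ],
}

resp = requests.post(API_URL, json=task_spec, headers=HEADERS)
resp.raise_for_status()
print("Created task:", resp.json().get("id"))
```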

Apart from this, many additional attributes can be added to the task. Annotation can be done using polygons, boxes, and polylines, and in different modes such as interpolation, attribute annotation, and segmentation, among others.
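Interpolation mode, for instance, lets an annotator draw a box on two keyframes and have the tool fill in the frames between them. A linear version is easy to sketch; the (x, y, width, height) box format is an assumption:

```python
def interpolate_box(box_a, box_b, frame_a, frame_b, frame):
    """Linearly interpolate a bounding box (x, y, w, h) between two
    keyframes; boxes are tuples and frames are integer indices."""
    t = (frame - frame_a) / (frame_b - frame_a)
    return tuple(a + t * (b - a) for a, b in zip(box_a, box_b))

# A box drawn at frame 0 and frame 10; the tool fills in frame 5.
print(interpolate_box((100, 50, 40, 40), (200, 60, 44, 40), 0, 10, 5))
# -> (150.0, 55.0, 42.0, 40.0)
```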

Automation reduces the average time taken to annotate data. Incorporating automation can save at least 65 percent of the effort and spare annotators significant mental fatigue.

Wrapping up

To achieve annotation at this scale, the automation tools mentioned earlier in this blog can help. Along with them, you need a team experienced enough to enable data annotation on a large scale. eInfochips has been a product engineering partner for many global leaders, with capabilities across the Product Lifecycle, from Product Design to Quality Engineering, and across the value chain, from Device to Digital. eInfochips also has expertise in AI and machine learning and has engaged with various automotive companies to deliver world-class solutions. To know more about our automotive and data annotation services and AI/ML expertise, get in touch with our experts.
