How to Mitigate Latency in Cloud-based Video Management Systems

Latency in a video management system is the delay between the moment a frame is captured at the camera and the moment that same frame is displayed. Every enterprise that uses video surveillance aims for zero or near-zero latency in its video feeds. This blog explains the types of latency issues in cloud-based video management systems and ways to mitigate them.

The video management software market is witnessing a significant transition from analog to IP-based video surveillance. The inherent benefits of IP-based video surveillance, such as scalability, flexibility, ease of installation, remote access, video analytics, and cost-effectiveness, are making it the preferred choice over analog-based surveillance systems.

With so much to offer, cloud-based video management software is also architecturally complex, and this complexity introduces problems such as latency in the video feeds, which is more pronounced over wireless networks than over wired analog networks.

Latency in video feeds affects real-time video surveillance because it delays the reaction time to critical events captured by the system. It not only undermines the prime purpose of a VMS, i.e., surveillance, but can also hamper the efficiency of connected third-party systems. Sometimes latency results in loss of data in the captured video, and in some cases its increased frequency can render the entire VMS application unusable.

This blog focuses on understanding latency issues in video management software and the measures to reduce them to a minimum.

What is latency in a cloud-based video management system?

Latency in a cloud-based Video Management System can be defined as the time delay between an event being captured by the IP cameras and its display on the system application (a display monitor, web client, or mobile app). It is measured in seconds or milliseconds (ms). The latency of the entire Video Management System is the collective outcome of multiple processes in the system, such as image capture, encoding and compression, decoding and decompression, and display.
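To make this concrete, here is a minimal Python sketch that totals a hypothetical latency budget across these stages. The stage names and millisecond values are illustrative assumptions, not measurements from any specific camera, network, or VMS product.

```python
# Illustrative end-to-end latency budget for a cloud VMS pipeline.
# Stage names and millisecond values are hypothetical examples only.

latency_budget_ms = {
    "capture (sensor readout)": 15,              # roughly one frame interval at 25-60 fps
    "compression (H.264 encode)": 30,            # depends on encoder settings / GOP
    "network (camera -> cloud -> client)": 120,  # bandwidth, routing, CDN hops
    "streamer (decode/scale/re-encode)": 50,     # server-side transcoding
    "display (decode + render)": 35,             # client decoder, GPU, monitor refresh
}

total_ms = sum(latency_budget_ms.values())
for stage, ms in latency_budget_ms.items():
    print(f"{stage:<42} {ms:>5} ms")
print(f"{'end-to-end latency':<42} {total_ms:>5} ms")
```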

The major contributions to end-to-end latency in a VMS come from four stages: IP cameras, the transmission network, streamer servers, and display systems. Let us examine these stages in detail:

  1. IP Camera Latency
    IP cameras are intelligent devices comprising camera sensors, video compression codecs (encoders), and a built-in CPU. They capture, compress, and send video images to the IP network or cloud at a specified frame rate. Latency in IP cameras stems from delays in processing the video images through the following components:

    • IP Camera Sensors: IP camera sensors are electronic devices that convert light into electrons, capturing pixels of light to form the image. Most IP cameras use one of two sensor types: CCD (Charge-Coupled Device) or CMOS (Complementary Metal-Oxide-Semiconductor). CMOS sensors are more common today because of their low latency, low power consumption, faster frame rates, and lower cost. The latency introduced by camera sensors is referred to as capture latency; a higher frame rate (fps) shortens the interval between frames, which lowers capture latency.
    • Video Compression Codecs: Video compression codecs are devices or algorithms that encode the digital video signal inside an IP camera. They compress the video by eliminating redundant or duplicate information across frames. Common codecs include H.264, MJPEG, and MPEG-4, of which H.264 is generally considered the most efficient. The latency introduced while encoding the video is referred to as compression latency.
    • Camera CPU: The CPU of an IP camera transmits the encoded data from the codec to the network. Besides supporting data transmission, the CPU also performs tasks such as de-interlacing and noise filtering. When the CPU prioritizes these other tasks over processing the encoded video data, the encoded data keeps buffering, resulting in latency.

Many camera manufacturers limit the number of clients that can connect to a camera directly. As the number of connected clients grows, the camera feed degrades in terms of jitter, lag, and video quality. Using a video streamer mitigates this issue: it takes streams from multiple cameras and redistributes them to a virtually unlimited number of clients, and because streamers run on high-end server machines, they remove the per-camera client limitation.
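As a minimal sketch of this fan-out pattern, the Python example below (using asyncio) reads a single simulated camera feed once and distributes every frame to any number of subscribed clients, with a small per-client buffer so that slow clients drop frames rather than accumulate delay. The class and function names are illustrative; a real streamer would ingest RTSP/ONVIF streams and serve standard viewing protocols.

```python
import asyncio

class StreamRelay:
    """Reads one camera feed and fans it out to many viewing clients."""

    def __init__(self):
        self._clients = set()

    def subscribe(self):
        # A tiny per-client buffer keeps end-to-end latency low.
        q = asyncio.Queue(maxsize=2)
        self._clients.add(q)
        return q

    async def publish(self, frame):
        for q in self._clients:
            if q.full():
                q.get_nowait()          # drop the oldest frame rather than buffer it
            q.put_nowait(frame)

async def camera_source(relay, fps=25, frames=100):
    # Simulated camera: in a real system this would be an RTSP/ONVIF stream.
    for i in range(frames):
        await relay.publish(f"frame-{i}".encode())
        await asyncio.sleep(1 / fps)

async def viewer(name, relay, count=5):
    q = relay.subscribe()
    for _ in range(count):
        frame = await q.get()
        print(f"{name} received {frame.decode()}")

async def main():
    relay = StreamRelay()
    await asyncio.gather(
        camera_source(relay),
        viewer("client-1", relay),
        viewer("client-2", relay),
    )

asyncio.run(main())
```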

  2. Network Latency
    Network latency in a cloud-based video management system is the delay introduced while IP video signals travel from the IP camera to the receiver's end over the end-to-end network. In cloud video surveillance, video data has to pass through the cloud architecture, comprising the streaming server (live feeds), playback server (recorded feeds), cloud storage, and a Content Delivery Network (CDN), before reaching the client side for display.

    In on-premise VMS systems, where video data is transferred over a LAN, network latency can be a few milliseconds. In cloud-based or hybrid VMS installations, where video data travels across the entire cloud infrastructure via routers, switches, and servers over the internet, network latency can be significant.

    Network latency depends on the bandwidth of the network and the amount of video data the IP cameras produce (the bitrate). Provisioning extra bandwidth helps the network absorb higher-bitrate video traffic, and using H.264 codecs reduces the average bitrate of the data; both measures reduce network latency. In short, more bandwidth and a good connection speed let the network carry the cameras' data with less queuing, resulting in reduced latency.
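As a rough illustration of the bandwidth-versus-bitrate relationship described above, the Python sketch below estimates how long one second of encoded video takes to push through a given uplink. The bitrate and bandwidth figures are assumed values chosen only to show the effect; when the bitrate exceeds the available bandwidth, a backlog forms and latency grows.

```python
# Rough transmission-time estimate: how long does one second of encoded
# video take to send over a given uplink? All numbers are illustrative.

def seconds_to_transmit(video_bitrate_mbps, uplink_bandwidth_mbps):
    """Time (in seconds) to push one second of video through the uplink."""
    return video_bitrate_mbps / uplink_bandwidth_mbps

uplink_mbps = 10.0                      # assumed available upstream bandwidth
for codec, bitrate_mbps in [("MJPEG", 12.0), ("MPEG-4", 6.0), ("H.264", 3.0)]:
    t = seconds_to_transmit(bitrate_mbps, uplink_mbps)
    verdict = "builds a backlog (latency grows)" if t > 1.0 else "keeps up"
    print(f"{codec:>6} @ {bitrate_mbps:>4} Mbps over {uplink_mbps} Mbps uplink: "
          f"{t:.2f} s per second of video -> {verdict}")
```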
  3. Streamer Server Latency
    Streaming services in the cloud receive feeds from cameras and process them for transmission to viewing clients with differing requirements for resolution, codec, bitrate, and so on. To do this, the servers receive the streams, then decode, scale, and re-encode them into various codecs before transmitting them to the viewing clients. This processing introduces latency ranging from a few milliseconds to a second, depending on how well the streaming software is optimized and on the computing capacity of the hardware. One way to mitigate these delays is to use hardware acceleration for decoding and encoding.
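As one example of hardware acceleration, the sketch below launches FFmpeg from Python to decode a camera stream on the GPU and re-encode it with NVIDIA's H.264 encoder before forwarding it to a relay. It assumes an FFmpeg build with NVIDIA support (h264_nvenc); the stream URLs and bitrate are placeholders, and the appropriate decoder/encoder names depend on the actual server hardware.

```python
# Sketch: offload the streamer's decode/re-encode work to a GPU by invoking
# FFmpeg with hardware acceleration. Assumes an FFmpeg build with NVIDIA
# support; URLs and bitrate below are placeholders.

import subprocess

camera_url = "rtsp://camera.example.local/stream1"   # placeholder input
relay_url = "rtsp://relay.example.local/stream1"     # placeholder output

cmd = [
    "ffmpeg",
    "-rtsp_transport", "tcp",      # more reliable than UDP over lossy links
    "-hwaccel", "cuda",            # GPU-accelerated H.264 decode
    "-i", camera_url,
    "-c:v", "h264_nvenc",          # GPU-accelerated H.264 encode
    "-b:v", "2M",                  # target bitrate for viewing clients
    "-f", "rtsp", relay_url,
]

subprocess.run(cmd, check=True)
```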
  4. Display Latency
    At the receiver's end, the cloud-based video management system receives compressed video data, which is then unpacked, reordered, decoded, and displayed on the viewing device (computer, mobile, tablet, etc.). The latency added during decompression and display depends on the video resolution, frame rate, software decoders, and the configuration of the system (processor, RAM, graphics card, etc.). The refresh rate of the display device and the operating system also play a crucial role in the display latency of the system.

Though it may be difficult to achieve zero latency in a Video Management System, latency can certainly be reduced to a minimum by choosing efficient, compatible sub-systems and placing them correctly in an optimized VMS architecture. Enterprises aspiring to a high-performing VMS with minimal latency should consult their VMS service providers and systems integrators while selecting hardware and software components for each part of their Video Management System, i.e., IP cameras, network, streamers, and display systems.

eInfochips offers video management solutions based on a microservices and multi-tenant federated architecture, which help address latency in video feeds by running applications independently and by supporting the sub-systems of the VMS architecture. Our solutions ensure that all VMS applications are scalable, compatible, and efficient enough to meet both real-time and offline video surveillance needs. To know more, download the brochure of Snapbricks VMS.

Anshul Saxena

Anshul Saxena works as Assistant Marketing Manager at eInfochips. He has more than nine years of experience in corporate marketing, inbound marketing, digital marketing, and business development. Anshul holds an engineering degree along with an MBA in Marketing.
