Latency in a video management system is the delay between the moment a frame is captured at the camera and the moment the same frame is displayed. Every enterprise that uses video surveillance aims for zero or near-zero latency in its video feeds. This blog explains the common types of latency in cloud-based video management systems and ways to mitigate them.
The video management software market is undergoing a significant transition from analog to IP-based video surveillance. The inherent benefits of IP-based video surveillance, such as scalability, flexibility, ease of installation, remote access, video analytics, and cost-effectiveness, are making it the preferred choice over analog surveillance systems.
With so much to offer, cloud-based video management software is also structurally complex, and that complexity introduces latency in the video feeds, which is more prominent over wireless networks than over wired analog networks.
Latency in video feeds undermines real-time surveillance because it delays the reaction time to critical events captured by the system. It not only affects the prime purpose of a VMS, i.e. surveillance, but can also hamper the efficiency of connected third-party systems. Sometimes latency results in loss of data in the captured video, and when it becomes frequent enough, it can render the entire VMS application unusable.
This blog focuses on understanding latency issues in video management software and the measures that reduce them to a minimum.
Latency in a cloud-based video management system can be defined as the delay between an event being captured by the IP cameras and its display on the system application (a display monitor, web app, or mobile app). It is typically measured in seconds or milliseconds (ms). The latency of the entire system is the collective outcome of multiple processes: image capture, encoding and video compression, transmission, decoding and video decompression, and display.
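Because end-to-end latency accumulates across these stages, it can be modeled as a simple sum of per-stage delays. The sketch below illustrates the idea; all the stage figures are hypothetical examples for illustration, not measurements from any particular VMS.

```python
# Illustrative sketch: end-to-end VMS latency as the sum of per-stage delays.
# All stage values below are hypothetical example figures, not measurements.

stage_latency_ms = {
    "capture": 33,   # roughly one frame interval at ~30 fps
    "encode": 20,    # camera-side compression delay
    "network": 80,   # camera -> cloud -> client transit
    "decode": 15,    # client-side decompression
    "display": 8,    # render plus monitor refresh
}

end_to_end_ms = sum(stage_latency_ms.values())
print(f"Estimated end-to-end latency: {end_to_end_ms} ms")
```

Breaking latency down this way also shows where optimization pays off: the largest single term (here, the network) dominates the total.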
The major contributions to end-to-end latency in a VMS come at four stages: the IP cameras, the transmission network, the streamer servers, and the display systems. Let us look at each stage in detail:
Many camera manufacturers limit the number of clients that can connect to a camera directly. As the number of connected clients grows, the camera feed degrades in terms of jitter, lag, and video quality. A video streamer mitigates this issue: it takes a single stream from each camera and re-streams it to a virtually unlimited number of clients, since streamers run on high-end server machines.
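The fan-out idea behind a streamer can be sketched in a few lines. This is a minimal illustration, not a production streamer: frames are stubbed as strings, whereas a real streamer would relay RTSP/RTP packets, and the `Streamer` class and its method names are assumptions made for this example.

```python
# Minimal sketch of streamer fan-out: the camera delivers each frame once,
# and the streamer re-broadcasts it to every subscribed client.
import queue
import threading

class Streamer:
    def __init__(self):
        self.clients = []          # one queue per connected client
        self.lock = threading.Lock()

    def subscribe(self):
        """Register a new viewer and return its frame queue."""
        q = queue.Queue()
        with self.lock:
            self.clients.append(q)
        return q

    def push_frame(self, frame):
        """Single pull from the camera, fan-out to all clients."""
        with self.lock:
            for q in self.clients:
                q.put(frame)

streamer = Streamer()
viewer_a = streamer.subscribe()
viewer_b = streamer.subscribe()
streamer.push_frame("frame-001")       # camera serves only one connection
frame_a, frame_b = viewer_a.get(), viewer_b.get()
print(frame_a, frame_b)                # both clients receive the same frame
```

The key point is that the camera's client limit no longer matters: it always serves exactly one connection, and the server-side streamer absorbs the fan-out load.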
Network latency in a cloud-based video management system can be defined as the delay introduced while transmitting IP video signals from the camera to the receiver's end across the network. In cloud video surveillance, video data has to pass through the cloud architecture, comprising the streaming server (live feeds), the playback server (recorded feeds), cloud storage, and a Content Delivery Network (CDN), before reaching the client side for display.
In on-premise VMS installations, where video data is transferred over a LAN, network latency can be as low as a few milliseconds. In cloud-based or hybrid VMS installations, where video data travels across the entire cloud infrastructure via routers, switches, and servers over the internet, network latency can be significant.
Network latency in a VMS depends on the bandwidth of the network and the amount of video data (bitrate) the IP cameras produce for transfer. Provisioning extra bandwidth capacity helps the network absorb higher-bitrate video traffic, and using an efficient codec such as H.264 reduces the average bitrate of the data; both measures reduce network latency.
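The bitrate-versus-bandwidth trade-off can be made concrete with a back-of-the-envelope calculation: the time needed to push one average frame onto the link is the frame size in bits divided by the link bandwidth. The figures below are illustrative assumptions, not vendor specifications.

```python
# Rough sketch: per-frame serialization delay = average frame bits / link bandwidth.
# Stream and link figures are illustrative assumptions, not vendor specs.

def serialization_delay_ms(bitrate_mbps: float, fps: int, bandwidth_mbps: float) -> float:
    frame_bits = (bitrate_mbps * 1_000_000) / fps       # average bits per frame
    return frame_bits / (bandwidth_mbps * 1_000_000) * 1000

# A 4 Mbps H.264 stream at 30 fps over a 10 Mbps uplink:
print(round(serialization_delay_ms(4, 30, 10), 2), "ms per frame")
# Halving the bitrate (better compression) halves the per-frame delay:
print(round(serialization_delay_ms(2, 30, 10), 2), "ms per frame")
```

This is only one component of network latency (queuing and propagation delays add on top), but it shows directly why lowering the codec's average bitrate or widening the link shortens the transfer time of every frame.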
At the receiver's end, the cloud-based video management system receives compressed video data, which is unpacked, reordered, decoded, and displayed on the screen (computer, mobile, tablet, etc.). The latency of decompression and display depends on the video resolution, frame rate, software decoders, and the configuration of the system (processor, RAM, graphics card, etc.). The display device's refresh rate and operating system also play a crucial role in the display latency of the system.
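The refresh-rate contribution is easy to quantify: a decoded frame that arrives at a random moment waits, on average, half a refresh interval before the monitor presents it. The sketch below shows this simple model; actual display pipelines add further buffering on top.

```python
# Sketch: average extra wait introduced by the monitor refresh cycle.
# Model: a frame arriving at a random moment waits half a refresh interval
# on average before being shown.

def avg_refresh_wait_ms(refresh_hz: float) -> float:
    return (1000 / refresh_hz) / 2

print(f"60 Hz monitor:  ~{avg_refresh_wait_ms(60):.1f} ms average wait")
print(f"144 Hz monitor: ~{avg_refresh_wait_ms(144):.1f} ms average wait")
```

A few milliseconds may sound negligible, but display wait adds to every frame on top of decode time, which is why monitor refresh rate is listed among the display-side latency factors.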
Though achieving zero latency in a video management system may not be feasible, latency can be reduced to a minimum by selecting efficient, compatible sub-systems and placing them correctly within an optimized VMS architecture. Enterprises aspiring to a high-performing VMS with minimal latency should consult their VMS service providers and systems integrators when selecting the hardware and software components for each section of the system: IP cameras, network, streamers, and display systems.
eInfochips offers video management solutions based on a micro-services, multi-tenant federated architecture that helps address latency in video feeds by running applications independently and supporting the sub-systems of the VMS architecture. Our solutions ensure that all VMS applications are scalable, compatible, and efficient enough to meet real-time as well as offline video surveillance needs. To know more, download the Snapbricks VMS brochure.