Surveillance is all around us today, and as the industry grows, so does the amount of data that needs processing.
As most surveillance output is video, it is increasingly difficult to jump directly to the events of interest within that video.
Security systems technology such as motion activation has made the filtering process a little easier, as cameras record only when motion is detected. However, for feeds covering busy areas such as streets and airports, where motion is near constant, this approach breaks down.
Facial recognition and object detection technologies filter events in video further. However, there is currently no way to compile those events into a single video playlist without massive human resource and effort.
In 2009, the US federal government gave state and local administrations US$300 million to fund ‘an ever-growing array of cameras’. In the United Kingdom alone there is an estimated one CCTV camera per 14 citizens, and you can’t travel around any big city without being watched on almost every street and every public transport line.
So how does one react proactively to events as they happen, rather than analysing, alerting on, and producing those events for playback after the fact?
It is not possible using today’s technologies, where video content must be rendered before it can be reviewed or shared.
The 9/11 terrorist attacks triggered US federal spending on homeland security exceeding US$790 billion. Since then, US investment has helped fuel the growth of a global video surveillance industry.
According to a 2011 report by Electronics.ca Publications, a market research firm, the video surveillance market was slated to grow from US$11.5 billion in 2008 to US$37.5 billion in 2015.
In the New York subway system alone, there are now 3700 security cameras online. 3700 cameras create a network that you can’t escape unless you wear a balaclava.
Of those, a remarkable 507 are ‘providing live feeds to NYPD’s Command Centre from three key transit hubs: Grand Central Station, Penn Station, and Times Square.’ And that number is growing.
The Holy Grail of surveillance systems is to be proactive as opposed to reactive (post event). Whilst technologies exist to detect number plates and faces, there is no system that will assemble the content securely and deliver it to a field operative, security, police, or first-response unit so they can act on that information immediately.
The White Van Scenario
A white van travels around an embassy three times in a row, triggering an alert (existing technology).
That alert is processed by technology looking at in-field information. Using computer modelling, the cameras around the perimeter are instructed to track and focus on the object, detecting its licence plate. An alert is sent to a human resource, who cues up the footage and starts analysing this elevated flag (assuming a resource is available).
A resource or team may call for satellite content and content from other published feeds (social content) to evaluate and track the risk.
The Boston bombings demonstrated exactly this process. Once the video is analysed, it needs to be packaged and delivered to many different authorities for actioning.
Industry insiders estimate that of all the satellite, security and video footage captured, only 1% is ever analysed by a human resource. There is simply too much information.
The Linius Impact
In the example above, Linius would sit between the surveillance detection technologies (facial, movement, heat) and the human resource.
Linius would receive those alerts from connected systems and collate frame-based events from each camera, packaging them immediately after they happen and greatly increasing the time available to make decisions on threats.
This would also remove the need for resources to try to identify video from each feed on each camera and then make a decision.
Depending on the complexity and number of video feeds, this could cut the time to action a threat from hours or days to minutes, potentially shifting the response from post event to at event.
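The collation step described above can be sketched in miniature. This is an illustration of the general idea only, not Linius's actual implementation: frame-accurate event windows from many camera feeds are padded slightly and ordered into a single chronological "virtual playlist" that can be handed to a responder without re-rendering any video. All names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Clip:
    camera_id: str
    start: float   # seconds into that camera's feed
    end: float

def build_playlist(events: list[Clip], pad: float = 2.0) -> list[Clip]:
    """Pad each detected event window slightly and order clips by start time."""
    padded = [Clip(e.camera_id, max(0.0, e.start - pad), e.end + pad)
              for e in events]
    return sorted(padded, key=lambda c: c.start)

playlist = build_playlist([
    Clip("gate-cam", 120.0, 126.0),
    Clip("street-cam", 95.0, 101.0),
])
# The street-cam clip (93.0-103.0) is ordered ahead of gate-cam (118.0-128.0).
```

Because the playlist is just metadata pointing into each feed, it can be assembled the moment detections arrive, which is what compresses time-to-action from hours to minutes.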