Guided Weapon Target Tracking Systems: A Boon for Addressing Industrial Security Vulnerabilities
Kailash Chaudhary GRESB-AP, MRICS, PMP
Today’s security market offers a wide array of highly advanced sensors and systems, all designed to help protect perimeters, monitor assets and safeguard lives. The abundance of choices stems from the fact that each sensor type has its own sweet spot where it performs best – whether that is a certain environment, a type of intrusion, or an expected level of performance within a given budget.
Another approach to the problem of expanding the “sweet spot” is to combine the capabilities of current sensors, allowing them to work together as a more intelligent solution, retaining all the advantages they possessed as individual entities while gaining a vast amount of new capability from their union with a complementary technology. One such collaboration is the integration of perimeter radar with video analytics.
Stand Alone Sensors:
As individual sensors, radars and video analytics are both excellent perimeter detection technologies. A single radar boasts a very large coverage range and performs in all types of weather conditions. Similarly, video analytics is highly prized for its ability to discriminate targets, understanding very specific details and behaviour through the use of both fixed and movable (PTZ) cameras.
Despite their advanced surveillance capabilities, no sensor is perfect. Video analytics requires a clear view of the scene, so extreme weather can prove challenging. In addition, as detection ranges become longer, the effective coverage area becomes smaller, due to the reduced field of view of a zoomed-in camera. Radars, for their part, suffer from the fact that they cannot always accurately identify the target: for each radar detection, a user must obtain visual confirmation before taking action. In some installations, they can also be prone to reflections and ground clutter.
The magic happens when these sensors are integrated into a collaborative system. In addition to each sensor covering the other’s weaknesses, the combination introduces a wealth of new features and increased capability.
Integration:
Before addressing how these sensors perform as a team, it’s prudent to first address how the combination is achieved. The actual integration of these two systems is not complex in terms of how they share and coordinate data. Radars communicate using a well-defined interface standard, with position and target information usually provided to the analytics engine via XML or a similar standards-based interface. Radars identify position as a “range and bearing.” The key to the integration is the use of video analytics. In simple terms, this form of intelligent video not only understands the nuances of the video pixel information, but it also understands where each pixel is located in 3D map space. Converting the radar’s range and bearing information into longitude, latitude and elevation coordinates allows the two systems to easily share and collaborate on target information.
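The range-and-bearing conversion described above can be sketched with a standard great-circle destination formula. This is an illustrative calculation only – function and variable names are assumptions, and a spherical-Earth model is assumed, which is adequate over the few-kilometre ranges typical of perimeter radar:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius (spherical model)

def radar_to_geo(radar_lat, radar_lon, range_m, bearing_deg):
    """Convert a radar 'range and bearing' detection into latitude/longitude.

    bearing_deg is measured clockwise from true north. Uses a spherical
    great-circle destination formula; fine at perimeter-radar ranges.
    """
    lat1 = math.radians(radar_lat)
    lon1 = math.radians(radar_lon)
    brg = math.radians(bearing_deg)
    d = range_m / EARTH_RADIUS_M  # angular distance along the surface

    lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                     math.cos(lat1) * math.sin(d) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)
```

Once the detection is expressed in geographic coordinates, the analytics engine can relate it directly to its own 3D map of the scene.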
The ability of each sensor to cover the challenges experienced by the other is, in itself, a significant value addition for any critical facility looking for improved surveillance capability. However, the integration between a radar and video analytics goes well beyond this, providing increased situational awareness, more precise detection and target identification, reduced nuisance alarm rates, and the capability for a completely automated response.
Situational Awareness:
The key to any surveillance system is its ability to accurately and quickly communicate the details of an event to the security operator. When using multiple systems – such as radar, video analytics, GPS and even a fence sensor – whose outputs are not integrated, a single event may result in many target tracks and object icons on the user interface. This can be due to many factors, including differences in update rates, sensor accuracy or differing conclusions on the type of target detected. One feature of the collaboration of radar and intelligent video is the ability to merge these tracks. So instead of seeing multiple tracks and multiple detection icons, all resulting from the same target, the solution can merge the target information into a single icon and a single combined track. The result is reduced clutter, which allows the operator to more quickly understand the situation and take appropriate actions.
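The track-merging idea can be illustrated with a minimal sketch: tracks from different sensors whose positions fall within a gating distance are fused into one display track. The 15 m gate, the flat local x/y grid and the data layout are all assumptions for illustration, not any vendor’s fusion algorithm:

```python
import math

def merge_tracks(tracks, gate_m=15.0):
    """Greedily fuse sensor tracks into single display tracks.

    Each input track is a dict {"id", "sensor", "x", "y"} with x/y in
    metres on a local site grid. Tracks within gate_m of an existing
    fused track are absorbed into it; positions are running-averaged so
    the operator sees one icon per real-world target.
    """
    merged = []
    for t in tracks:
        for m in merged:
            if math.hypot(t["x"] - m["x"], t["y"] - m["y"]) <= gate_m:
                m["sources"].append(t["sensor"])
                n = len(m["sources"])
                m["x"] += (t["x"] - m["x"]) / n  # incremental average
                m["y"] += (t["y"] - m["y"]) / n
                break
        else:
            merged.append({"x": t["x"], "y": t["y"], "sources": [t["sensor"]]})
    return merged
```

Here a radar track and a video track a few metres apart collapse into one icon listing both sources, while a distant fence-sensor track remains separate.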
Detection / Target Identification:
Another key benefit from the integration of radar and intelligent video is the idea of alarm collaboration. Both sensors have the ability to detect and identify various characteristics of a potential target. However, as a combined sensor able to evaluate all the data collected from both technologies, the resulting target detection and identification becomes extremely accurate and provides the user with a robust set of data related to the target. This reduces a wide range of potential nuisance and environmental alarms, which either sensor on its own may have had difficulty identifying. This feature also provides for more precise alarm conditions than are typically achievable with only a single sensor type. These can include very specific conditions around location, type of target, location relative to other targets or assets, as well as target behaviour. The vast amount of target data collected from the combined sensors can be provided to the user as a combined data set, reducing the need to search for this data across several systems, or to manually confirm items.
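One way to picture alarm collaboration is as a fusion rule that weighs the evidence from both sensors before raising an alarm. The classes, weights and 0.75 threshold below are illustrative assumptions, not a specific product’s logic; the point is that the fused decision suppresses alarms either sensor alone might have raised:

```python
def collaborative_alarm(radar_conf, video_class, video_conf,
                        in_restricted_zone, threshold=0.75):
    """Fuse radar and video evidence into a single alarm decision.

    radar_conf: radar's confidence (0-1) that a genuine moving target exists.
    video_class / video_conf: analytics label and confidence for the target.
    Returns True only when the target is in a restricted zone, the video
    hasn't identified a nuisance source, and the fused confidence is high.
    """
    if video_class in ("animal", "foliage", "environmental"):
        return False  # analytics identified a known nuisance source
    combined = 0.5 * radar_conf + 0.5 * video_conf  # equal-weight fusion
    return in_restricted_zone and combined >= threshold
```

A strong radar return classified by video as an animal is silently dropped, while a confirmed person in a restricted zone produces a single high-confidence alarm.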
Automated Response:
Perhaps one of the most exciting aspects of combining these sensors is their ability to automate many of the first-response actions. Doing so gives security officials time to react to the situation without the added responsibility of manually maintaining surveillance of the intrusion. A powerful feature which enables this force multiplier is slew-to-cue, or slew-to-radar. When a new radar target appears, this information is communicated to the video system, which in turn selects the most appropriate camera, or cameras, and automatically steers them to the point of intrusion, centring the target in the camera view. With the target under automatic surveillance, command and control software can cause the sensors to collaborate on the validity of the target, and automatically provide the information to the operator for visual confirmation. Once a target is confirmed, the user interface provides the ability to change the map-based icon to reflect the confirmed nature of the target from “unknown” to “friend” or “foe.” Through continued coordination, the system will then remember this target identification as long as the track persists.
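The geometry behind slew-to-cue reduces to computing the pan and tilt angles that point a PTZ camera at the radar-reported position. This is a minimal sketch on a flat local grid (east = +x, north = +y, up = +z, pan measured clockwise from north); the coordinate conventions and function name are assumptions for illustration:

```python
import math

def slew_to_cue(cam_x, cam_y, cam_z, tgt_x, tgt_y, tgt_z=0.0):
    """Return the (pan, tilt) in degrees that centres a target in a
    PTZ camera's view. Coordinates are metres on a local site grid;
    pan 0 deg = north, increasing clockwise; negative tilt looks down.
    """
    dx, dy, dz = tgt_x - cam_x, tgt_y - cam_y, tgt_z - cam_z
    pan = math.degrees(math.atan2(dx, dy)) % 360.0
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return pan, tilt
```

For a mast-mounted camera 10 m up with a ground target 100 m due east, this yields a pan of 90° and a slight downward tilt, which the command-and-control software would send to the camera as an absolute PTZ move.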
Slew-to-cue further extends to more advanced automation, including camera auto-follow, radar follow and camera hand-off. Once a camera has been cued to a location by either a video or radar detection, video analytics has the capability to lock onto the target and automatically follow it by continuously adjusting the pan, tilt and zoom of the camera, keeping the target centred in the camera’s field of view. Should the target leave the coverage area of a camera, the radar can then steer the camera using its position data to maintain a continued view of the target. Working together, the sensors can determine when the target’s path enters an area covered by another camera, at which time they can issue a slew-to-cue command and allow the new camera to take over the auto-follow duties. During these types of scenarios, the collaboration also boasts the ability to provide a visual record of the sensors’ actions on a map-based user interface.
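The hand-off decision described above can be sketched as a simple coverage check: as the radar track moves, the system looks for the camera whose coverage contains the target and cues the nearest one. Circular coverage areas and the data layout are simplifying assumptions; real deployments would account for camera orientation, obstructions and field of view:

```python
import math

def camera_for_target(cameras, tgt_x, tgt_y):
    """Pick the camera that should auto-follow the target.

    Each camera is a dict {"name", "x", "y", "range_m"} on a local site
    grid (metres), with coverage modelled as a circle. Returns the name
    of the nearest covering camera, or None if no camera covers the
    target (in which case the radar alone continues tracking).
    """
    candidates = []
    for cam in cameras:
        dist = math.hypot(tgt_x - cam["x"], tgt_y - cam["y"])
        if dist <= cam["range_m"]:
            candidates.append((dist, cam["name"]))
    return min(candidates)[1] if candidates else None
```

Calling this on each radar track update tells the command-and-control layer when to issue the next slew-to-cue command and which camera should inherit the auto-follow duties.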