Field of View

Back at my old job at the LAPD, being the "video guy," I was often asked to evaluate a new camera or piece of image/video processing software. I attended many meetings with vendors pitching all manner of gear to management.

One of the first questions I'd ask is "what is the tool's purpose?" For a camera / lens: what is it designed to capture, at what distance, and at what level of quality? Knowing what the tool was designed for, I could set up a series of tests to see if the tool was fit for purpose - and accurately report the results back up the chain of command.

For example, in testing a system designed to capture identification-quality video of people and license plates at a distance of 250' in challenging light conditions, my focus was on the appropriate lens and encoding technology. I understood that at that distance the field of view would be narrow, but that restricting the view was necessary in order to get enough pixels on target. I also wanted an encoding scheme that was tolerant of motion.
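
To make that concrete, here's a back-of-the-envelope version of the kind of check involved. The resolution, lens, and distance figures below are illustrative assumptions on my part, not the actual test specification.

```python
# A minimal sketch (illustrative numbers, not the actual test spec): how many
# pixels land on a target at 250 ft for a given horizontal field of view.
import math

def pixels_per_foot(h_resolution_px: int, fov_deg: float, distance_ft: float) -> float:
    """Pixels per horizontal foot of scene at the given distance."""
    scene_width_ft = 2 * distance_ft * math.tan(math.radians(fov_deg) / 2)
    return h_resolution_px / scene_width_ft

# A 1920-px-wide sensor with a narrow 5-degree lens vs. a wide 120-degree lens, at 250 ft.
print(round(pixels_per_foot(1920, 5, 250), 1))    # ~88 px/ft - enough detail for ID work
print(round(pixels_per_foot(1920, 120, 250), 1))  # ~2.2 px/ft - far too few
```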

These types of tests are easy. They have a quantitative outcome - something I can measure. Modern test / validation schemes now have to account for qualitative factors and social outcomes that aren't easy to quantify. How do you test a piece of equipment against a goal of "transparency" or "complaint reduction?" If it's a camera - it's actually easy.

Every camera system is designed to mimic the human vision system.

Our vision and perception system can "see" up to 576 megapixels, but only 7 megapixels are actively processed at any given time. [source]

Combined with the quantum field generator between our ears, our eyes are constantly scanning the scene while our brains process the incoming information.

Our systems can reach a field of view between 100° and 120°. If I'm designing a system to mimic the human vision system, I'd choose a lens to deliver this field of view.
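
Turning that target into a lens choice is straightforward geometry: the horizontal field of view follows FOV = 2·atan(sensor width / (2·focal length)). A minimal sketch, using an assumed sensor size of my choosing:

```python
# A minimal sketch (assumed sensor size, not a specific product): choosing a
# focal length that delivers a target horizontal field of view, using
#   FOV = 2 * atan(sensor_width / (2 * focal_length))
import math

def focal_length_for_fov(sensor_width_mm: float, fov_deg: float) -> float:
    """Return the focal length (mm) that gives the requested horizontal FOV."""
    return sensor_width_mm / (2 * math.tan(math.radians(fov_deg) / 2))

# Example: a hypothetical 1/2.3" sensor (~6.17 mm wide) and a 120-degree target FOV.
print(round(focal_length_for_fov(6.17, 120), 2), "mm")   # ~1.78 mm
```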

Which gets us back to the purpose of the system. If the stated purpose is to mimic the human vision system, then again, you'd choose a +/- 120° field of view. But, if the stated purpose of the system is "transparency" or "complaint reduction," as is often the case, then a much larger field of view is desired.

Given that you only get one chance to record an incident, wouldn't you want as much information recorded as possible? Expanding beyond 120° allows you to capture more of the scene. This will allow you to capture more of "what happened" around the periphery of the incident. If, for some reason, the camera wasn't quite pointed in the correct direction, having a wider field of view will increase your chance of capturing the incident.
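
The geometry makes the point: at a fixed distance, the width of the captured scene grows with the tangent of half the field of view. A quick illustration with made-up distances:

```python
# A quick illustration (my own example distances): captured scene width grows
# quickly as the horizontal field of view widens.
#   scene_width = 2 * distance * tan(FOV / 2)
import math

def scene_width_ft(distance_ft: float, fov_deg: float) -> float:
    return 2 * distance_ft * math.tan(math.radians(fov_deg) / 2)

for fov in (100, 120, 140):
    print(f"{fov:>3} deg: ~{scene_width_ft(15, fov):.0f} ft wide at 15 ft")
# 100 deg: ~36 ft, 120 deg: ~52 ft, 140 deg: ~82 ft
```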

The main complaint about the increase in field of view revolves around the potential for optical distortion around the edges of the recording. That being said, I'd rather have the pixels than not - I can always rectify the image in FIVE to answer any questions that may arise from the incident.

Within Axon Five, there are ways to address this sort of optical distortion - Correct Fisheye and Undistort.

The Correct Fisheye tool in Axon Five corrects the distortion caused by the most common types of fisheye lenses. If configured correctly, it can properly compensate even strong distortions within the entire image.* 
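
Axon Five's internals aren't something I can show here, but as a rough illustration of the same idea, this is what a comparable fisheye correction looks like with OpenCV's fisheye model. The calibration values (K, D) below are placeholders; in practice they come from calibrating the specific lens.

```python
# A rough illustration of fisheye correction using OpenCV's fisheye model -
# NOT Axon Five's implementation. K (camera matrix) and D (distortion
# coefficients) are placeholder values standing in for a real lens calibration.
import cv2
import numpy as np

img = cv2.imread("frame.png")                      # hypothetical input frame
h, w = img.shape[:2]

K = np.array([[w * 0.4, 0, w / 2],                 # assumed focal lengths / principal point
              [0, w * 0.4, h / 2],
              [0, 0, 1]], dtype=np.float64)
D = np.array([-0.05, 0.01, 0.0, 0.0], dtype=np.float64).reshape(4, 1)  # assumed k1..k4

undistorted = cv2.fisheye.undistortImage(img, K, D, Knew=K)
cv2.imwrite("frame_defished.png", undistorted)
```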

The Undistort tool in Axon Five corrects the geometric distortion caused by capturing devices' optics. Although distortion can be irregular or follow many patterns, the most commonly encountered distortions are radially symmetric, or approximately so, arising from the symmetry of a photographic lens. In practice, distortion causes straight lines to appear curved in the image: this effect grows when going away from the center of the image.* 
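
For that common, radially symmetric case, the standard Brown-Conrady model (see the Brown, 1966 reference below) describes the curvature with polynomial radial terms. A generic sketch of the same idea with OpenCV - again with placeholder coefficients, not Axon Five's code:

```python
# A generic sketch of radial (Brown-Conrady) undistortion with OpenCV -
# illustrative only. cameraMatrix and distCoeffs are placeholders; real
# values come from calibration or from estimating the distortion in the image.
import cv2
import numpy as np

img = cv2.imread("frame.png")                      # hypothetical input frame
h, w = img.shape[:2]

cameraMatrix = np.array([[w, 0, w / 2],
                         [0, w, h / 2],
                         [0, 0, 1]], dtype=np.float64)
# (k1, k2, p1, p2, k3): a negative k1 corrects typical barrel distortion
distCoeffs = np.array([-0.25, 0.05, 0.0, 0.0, 0.0], dtype=np.float64)

corrected = cv2.undistort(img, cameraMatrix, distCoeffs)
cv2.imwrite("frame_undistorted.png", corrected)
```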

Lines that should be straight in real life appear curved (above). This is quickly, easily, and accurately fixed with Axon Five (below).

Again, I'd rather have the information and not need it, than need it and not have it. In terms of field of view - when the goal is transparency and/or a reduction of complaints - the wider the field of view, the better. We can always rectify the view later.

If you're interested in learning more about our leading-edge tools or training opportunities, contact us today.

The Axon Forensic Suite tools are powered by Amped Software technologies.

*References:

Anil K. Jain, “Fundamentals of Digital Image Processing”, Prentice Hall, pp. 320–322, 1989. ISBN: 0-13-336165-9.

Bernd Jähne, “Digital Image Processing”, 6th revised and extended edition, Springer-Verlag Berlin Heidelberg 2005. ISBN: 3-540-24035-7.

D. Schneider, E. Schwalbe and H.-G. Maas, “Validation of geometric models for fisheye lenses”, in ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 64, No. 3, pp. 259–266, May 2009. https://dx.doi.org/10.1016/j.isprsjprs.2009.01.001

Duane C. Brown, “Decentering distortion of lenses”, in Photogrammetric Engineering, Vol. 32, No. 3, pp. 444–462, May 1966.

Hsieh Hou and H. Andrews, “Cubic splines for image interpolation and digital filtering”, in IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 26, No. 6, pp. 508–517, December 1978. https://dx.doi.org/10.1109/TASSP.1978.1163154

Kenro Miyamoto, “Fish Eye Lens”, in Journal of the Optical Society of America, Vol. 54, No. 8, pp. 1060–1061, 1964. https://dx.doi.org/10.1364/JOSA.54.001060
