Video latency in C2/C4ISR: The effect on decision-making
What is latency and what effect will it have on my C2/C4ISR environment?
Latency is the delay from an input into a system to the desired outcome; the term is understood slightly differently in various contexts, and latency issues also vary from one system to another.
Latency greatly affects how usable and responsive electronic and mechanical devices, as well as communications, feel to the user.
Latency in communications is easy to observe in live transmissions between distant points on the earth, because each hop, from a ground transmitter up to a satellite and from the satellite back down to a receiver, takes time. People connecting to these live events from far away can visibly be left waiting for responses. This latency is the wait time introduced both by the signal travelling the geographical distance and by its passage through the various pieces of communications equipment along the way.
Even fiber optics are limited by more than just the speed of light in a vacuum: the refractive index of the glass slows the signal, and every repeater or amplifier along the route introduces additional delay.
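To put the fiber figure in perspective, the short Python sketch below estimates one-way propagation delay from distance alone, assuming a typical refractive index of about 1.468 for single-mode fiber and ignoring repeaters, amplifiers and routing:

```python
# Rough one-way propagation delay over optical fiber (illustrative values).
C_VACUUM_KM_S = 299_792          # speed of light in vacuum, km/s
REFRACTIVE_INDEX = 1.468         # typical for single-mode fiber (assumed value)

def fiber_propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds, from distance alone."""
    speed_in_fiber_km_s = C_VACUUM_KM_S / REFRACTIVE_INDEX   # ~204,000 km/s
    return distance_km / speed_in_fiber_km_s * 1000

if __name__ == "__main__":
    for km in (100, 1_000, 10_000):
        print(f"{km:>6} km  ->  {fiber_propagation_delay_ms(km):6.2f} ms one-way")
```

Even before any equipment is involved, a 10,000 km fiber path costs roughly 50 ms in each direction.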
Types of latency
Network latency is an expression of how much time it takes for a packet of data to get from one designated point to another. In some environments, latency is measured by sending a packet that is returned to the sender; the round-trip time is considered the latency. Ideally latency is as close to zero as possible.
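As a simple illustration of round-trip measurement, the Python sketch below approximates latency as the time taken to complete a TCP handshake with a remote host; example.com and port 443 are placeholder values, and a true ICMP ping measures the same round trip at a lower level:

```python
import socket
import time

def tcp_round_trip_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Approximate round-trip latency as the time to complete a TCP handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # handshake completed; the connection is closed on exit
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    print(f"Round trip to example.com: {tcp_round_trip_ms('example.com'):.1f} ms")
```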
The contributors to network latency include:
- Propagation: This is simply the time it takes for a packet to travel between one place and another at the speed of light.
- Transmission: The medium itself (whether optical fiber, wireless or some other medium) introduces some delay, which varies from one medium to another. The size of the packet also adds delay to a round trip, since a larger packet takes longer to receive and return than a short one. And when signals must be boosted by a repeater, this too introduces additional latency.
- Router and other processing: Each gateway node takes time to examine and possibly change the header of a packet (for example, decrementing the hop count in the time-to-live field).
- Other computer and storage delays: Within networks at each end of the journey, a packet may be subject to storage and hard disk access delays at intermediate devices such as switches and bridges. (In backbone statistics, however, this kind of latency is probably not considered.)
Internet latency is just a special case of network latency: the Internet is a very large wide-area network (WAN), and the same factors as above determine its latency. However, the distances involved and the number of hops over equipment and servers are all greater than for smaller networks. Internet latency measurement would generally start at the exit of a network and end on the return of the requested data from an Internet resource.
WAN latency itself can be an important factor in determining Internet latency. A WAN that is busy directing other traffic will produce a delay whether a resource is being requested from a server on the LAN, from another computer on that network or from elsewhere on the Internet. LAN users will also experience delay when the WAN is busy. In either of these examples the delay would still exist even if the rest of the hops, including the server where the desired data was located, were entirely free of traffic congestion.
Audio latency is the delay between a sound being created and being heard. For sound created in the physical world, this delay is determined by the speed of sound, which varies depending on the medium the sound wave travels through: it generally travels fastest through solids, less quickly through liquids and slowest through gases such as air. We usually refer to the speed of sound as measured in dry air at room temperature, which is roughly 767 miles per hour (about 343 meters per second).
In electronics, audio latency is the cumulative delay from audio input to audio output. How long this delay is depends on the hardware and even the software used, such as the operating system and drivers in computer audio. Latencies of around 30 milliseconds or more are generally noticed by an individual: the production of a sound and its arrival at the ear are perceived as separate events.
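As an illustration of how the software side contributes, the Python sketch below converts an audio buffer size and sample rate into the delay that buffer adds; the 48 kHz rate and the buffer sizes are assumed, typical values:

```python
# Delay added by one digital audio buffer (illustrative, assumed values).
SAMPLE_RATE_HZ = 48_000    # common professional sample rate

def buffer_latency_ms(frames: int, sample_rate_hz: int) -> float:
    """Delay contributed by a single audio buffer: frames / sample rate."""
    return frames / sample_rate_hz * 1000

if __name__ == "__main__":
    for frames in (128, 256, 512, 1024, 2048):
        print(f"{frames:>5} frames @ {SAMPLE_RATE_HZ} Hz -> "
              f"{buffer_latency_ms(frames, SAMPLE_RATE_HZ):5.1f} ms")
```

At 48 kHz, a 2,048-frame buffer alone already adds more than 40 ms, comfortably past the 30 ms threshold noted above.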
Operational latency can be defined as the sum of the times of the individual operations when they are performed in a linear (sequential) workflow. In parallel workflows, the latency is instead determined by the slowest operation performed by a single task worker.
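A minimal Python sketch of that difference, using made-up step durations, is below; the serial run takes roughly the sum of the durations, while the parallel run takes roughly the longest single one:

```python
# Operational latency: sum of steps in a serial workflow vs. the slowest step
# when the same steps run in parallel (durations are made-up example values).
from concurrent.futures import ThreadPoolExecutor
import time

STEP_DURATIONS_S = [0.2, 0.5, 0.3]   # hypothetical task times

def do_step(duration_s: float) -> None:
    time.sleep(duration_s)           # stand-in for real work

def run_serial() -> float:
    start = time.perf_counter()
    for d in STEP_DURATIONS_S:
        do_step(d)
    return time.perf_counter() - start          # ~sum of durations (~1.0 s)

def run_parallel() -> float:
    start = time.perf_counter()
    with ThreadPoolExecutor() as pool:
        list(pool.map(do_step, STEP_DURATIONS_S))
    return time.perf_counter() - start          # ~slowest duration (~0.5 s)

if __name__ == "__main__":
    print(f"serial:   {run_serial():.2f} s")
    print(f"parallel: {run_parallel():.2f} s")
```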
Mechanical latency is the delay from input into a mechanical system or device to the desired output. This delay is determined by the Newtonian-physics limits of the mechanism (quantum mechanics aside). An example is the time it takes a gear to change after the shift lever of a gearbox or bicycle shifter is actuated.
Computer and operating system latency is the combined delay between an input or command and the desired output. In a computer system, latency is often used to mean any delay or waiting that increases real or perceived response time beyond what is desired. Specific contributors to computer latency include mismatches in data speed between the microprocessor and input/output devices, inadequate data buffers and the performance of the hardware involved, as well as its drivers. The processing load of the computer can also add significant latency.
From the user's perspective, latency issues are usually a perceived lag between an action and the response to it. In 3D VR simulation, for example, using a helmet that provides stereoscopic vision and head tracking, latency is the time from the computer's detection of head motion to the moment it displays the corresponding motion in the image.
For training and simulation, low latency is critical. With significant latency, control becomes difficult: the user lags behind the real-time events of the exercise because of delays in the information reaching their computer.
Latency is noticeable to an individual and generally becomes more annoying, and more damaging to productivity, as it rises above roughly 30 ms.
The severity of the effect varies from one application to another, as do the mitigating tactics. In communications, delays can be the result of heavy traffic, hardware problems, or incorrect setup and/or configuration.
Latency testing: Latency testing varies from application to application. In some applications, measuring latency requires special and complex equipment or knowledge of particular computer commands and programs; in other cases, latency can be measured with a stopwatch.
In networking, an estimated latency to equipment or servers can be determined by running a ping command; information about latency through all the hops can be gathered with a traceroute command. High-speed cameras might be used to capture the minute differences in response times from input to output in various mechanical and electronic systems.
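Both commands can also be driven from a short script; the sketch below assumes a Unix-like system (Windows uses ping -n and tracert) and a placeholder target host:

```python
# Quick latency checks from Python by shelling out to the standard tools.
import subprocess

HOST = "example.com"   # placeholder target

# Estimated round-trip latency to a host.
subprocess.run(["ping", "-c", "4", HOST], check=False)

# Per-hop latency along the route to the host.
subprocess.run(["traceroute", HOST], check=False)
```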
Reducing latency: Reducing latency is a matter of tuning, tweaking and upgrading computer hardware, software and mechanical systems. Within a computer, latency can be removed or hidden by techniques such as prefetching (anticipating data input requests before they are made) and multithreading, or by spreading work in parallel across multiple execution threads.
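One hedged illustration of latency hiding: the Python sketch below prefetches the next data item in a background thread while the current item is processed, so the fetch delay overlaps with useful work (fetch_item and process_item are hypothetical stand-ins with simulated delays):

```python
# Hiding latency by prefetching the next item while the current one is processed.
from concurrent.futures import ThreadPoolExecutor
import time

def fetch_item(i: int) -> str:
    time.sleep(0.2)                 # simulated slow I/O
    return f"item-{i}"

def process_item(item: str) -> None:
    time.sleep(0.1)                 # simulated processing work

def run_with_prefetch(n_items: int) -> None:
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fetch_item, 0)              # start the first fetch
        for i in range(n_items):
            item = future.result()                       # wait for the current item
            if i + 1 < n_items:
                future = pool.submit(fetch_item, i + 1)  # prefetch the next item
            process_item(item)                           # overlaps with the prefetch

if __name__ == "__main__":
    start = time.perf_counter()
    run_with_prefetch(5)
    print(f"elapsed: {time.perf_counter() - start:.2f} s")  # ~1.1 s vs ~1.5 s serially
```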
Within the C2 and C4ISR environment, the user should carefully consider the detrimental effect that latency in imagery and audio has on operator efficiency. For this reason (in addition to the benefits of increased security), most control-room designs are moving to secure KVM (keyboard, video, mouse) signal extension and switching rather than more traditional IP-based networks.