Rethinking Pre-Check
[from the archives 2/17/2012] by Joseph F. Corrao
After researching a pilot program called “Registered Traveler” (RT) for about five years, the Transportation Security Administration (TSA) published its conclusions on July 30, 2008, in the Federal Register, the U.S. Government's official publication of record for all things regulatory. TSA stated three key conclusions:
“First, current technology is insufficient to allow anyone, even travelers who provide biographic and biometric information and undergo a TSA security threat assessment, to bypass the minimum screening procedures at airport security checkpoints....
“Second, TSA concluded that an individual's successful completion of a TSA threat assessment did not eliminate the possibility that the individual might initiate an action that threatens the lives of other passengers. Therefore, screening of these individuals should remain the same as screening of other passengers.
“Third, while effective identity verification is a critically important element in a multilayered approach to aviation security, RT is not a standalone security program.”
The RT pilot program was a partnership between the Federal government and the private sector, developed in large part to placate Congress, which, back in 2001, immediately following the horrific events of September 11, created the TSA to tighten up aviation security screening and simultaneously encouraged TSA to, “[e]stablish requirements to implement trusted passenger programs and use available technologies to expedite the security screening of passengers who participate in such programs.”
TSA's 2008 conclusions amounted to a judgment that RT – or trusted passenger programs more generally – could not substitute for actual screening of passengers as a valid security measure. This result did not sit well with certain members of Congress, who hounded TSA incessantly – in hearings, in official correspondence, and in the press – for ways to expedite the very screening that Congress had ordered TSA to make more rigorous.
Around 2010, TSA found a way: TSA Pre-Check, and TSA is so proud of it that it trademarked the name, which actually consists of the words “TSA Pre-” plus the mark generally known as a “check.” Call it “PC” for short.
Where RT was a private-public partnership, PC is a wholly government-run operation. Under RT, private vendors advertised the program, enrolled customers by collecting their personal information and fingerprints, issued ID cards to customers who passed a background check called a “security assessment,” installed card readers at participating airports, and supplied personnel to expedite the customers' passage through security to the airport gate waiting area. Under PC, TSA does all this itself. In all ways pertinent to this article, PC is nothing more than RT with far less private sector participation.
And if TSA's 2008 conclusions were valid with respect to RT, they are no less valid with respect to PC.
History, logic, and statistical theory all support TSA's 2008 conclusions. Historically, we have seen terror plots mounted by persons who had no derogatory information in their backgrounds prior to committing their acts of terrorism. For example, in 2007, British Intelligence discovered a plot, mounted by a cell composed of medical doctors, to detonate car bombs around London. According to published reports, the plot came to light when a rigged Mercedes sedan failed to detonate – not because of any background check or identity verification. As The Washington Post reported at the time, “terrorism experts said the suspects' profession is not a surprise -- many top al-Qaeda operatives, they noted, have advanced education.”
Operatives like the London doctors cannot be caught by background checks because they have excellent backgrounds; they cannot be caught by identity checks because they use their own names. They can be caught by intelligently designed screening, because even people with excellent backgrounds and real names cannot do terror-level harm without the tools – the explosives, the weapons, the gear – that intelligently designed screening can detect.
Logic suggests that there are many terrorist operatives and potential operatives who do not yet have derogatory information in their backgrounds. Such operatives may be recent recruits, long-term “sleeper” operatives groomed to fit in, dupes who do not understand what they are doing for one reason or another, and future recruits – those who have not yet signed up for the cause, but will. Such persons can present themselves for any combination of identity verification and background check that can be devised – and they will pass it. Until they make a critical mistake, such operatives may be undetectable. But intelligently designed screening can detect the tools such operatives must use to accomplish terror-level harm.
TSA defends PC by asserting that any lessening of screening is offset by other “layers of security,” such as behavior detection officers (BDOs) or bomb-detection canines. This defense makes sense only if two conditions are met: (1) BDOs and bomb-sniffing dogs are used only where PC is used, so that they make up for whatever security weaknesses PC may introduce, and (2) BDOs and bomb-sniffing dogs actually work. Neither condition is met in reality. In reality, BDOs and bomb-sniffing dogs are deployed to U.S. airports regardless of whether PC is in use at a given airport, so they cannot offset a weakness PC introduces; the introduction of PC at an airport therefore represents a net reduction in security. Also in reality, TSA's BDO program has been roundly criticized as ineffective by watchdogs both inside and outside government, who report that its effectiveness in detecting terrorists is entirely unproven and approximates random chance. Canines seem to be far more effective, but even canine noses fatigue, some rather quickly.
Finally, statistical theory tells us that, for any given system, the total error is fixed. Every imperfect system carries some probability of making mistakes, and that total error probability can be changed only by changing the system itself. Without changing the system, the error probability – and the corresponding observed error rate – can be pushed around, but it cannot be reduced.
What does “pushing” the error mean? Errors come in two flavors: Type I or “false positive” error, and Type II or “false negative” error. A false positive is the mistake TSA makes when it treats someone who is not a terrorist as a terrorist. A false negative is the mistake TSA makes when it treats someone who is a terrorist as not a terrorist. Statistical theory tells us that, within any given system, we can dial the error from false positive to false negative and back – exactly like turning the gain up or down on a metal detector. The total amount of error stays the same; all we can do, without changing the system in some meaningful way, is dial the detector up to catch all the metal (and falsely catch some things that are not metal), or dial it down to avoid catching anything that is not metal (and, in the process, fail to catch some metal). There is no sweet spot where all the metal, and only the metal, is detected, unless the system is perfect. In the real world, no system is perfect; error exists and, where it exists, it can only be pushed around – it cannot be reduced without changing the system.
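The metal-detector analogy can be made concrete with a toy simulation. All numbers below are invented for illustration, not drawn from any real screening data: an imperfect detector assigns each item a score, the two populations overlap, and moving the alarm threshold merely trades false positives for false negatives without shrinking the total error.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical detector scores: benign items tend to score low, threat
# items tend to score high, but the distributions overlap -- so no
# threshold can separate them perfectly. (Made-up parameters.)
benign = [random.gauss(0.3, 0.15) for _ in range(10_000)]
threat = [random.gauss(0.7, 0.15) for _ in range(10_000)]

def error_rates(threshold):
    """Return (false positive rate, false negative rate) at a given 'gain'."""
    fp = sum(s >= threshold for s in benign) / len(benign)  # benign flagged
    fn = sum(s < threshold for s in threat) / len(threat)   # threats missed
    return fp, fn

# "Dialing the detector up" (low threshold) vs. "down" (high threshold):
for t in (0.35, 0.50, 0.65):
    fp, fn = error_rates(t)
    print(f"threshold={t:.2f}  false positives={fp:.1%}  false negatives={fn:.1%}")
```

Sweeping the threshold shows one error rate falling as the other rises; the only way to lower both at once is to change the system, i.e., make the two score distributions overlap less.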
PC does not change the aviation security system. There are no layers of security deployed only to back-stop PC; if there were, it would mean that effective security measures were being withheld from some airports only because PC had not been deployed there – an untenable, even irresponsible, situation. Where PC is deployed, it merely pushes the existing error from false positive toward false negative. Logic suggests that the error rate is considerable: expedited screening is valuable precisely because TSA treats so many non-terrorists as if they were terrorists that something has to be done about it. And if the error rate is considerable, then pushing it toward false negative means that PC must be opening considerable opportunities for terrorists to pass through undetected.
If the PC security picture is not entirely gloomy, it is because of one final irony. Certain TSA security measures – forcing travelers to remove their shoes, outerwear, and belts and to take their laptops and liquids out of their carry-on cases – have been criticized as “security theater,” measures that seem more useful than they really are. Under PC, TSA expedites screening by letting Pre-Check participants keep their shoes, light outerwear, and belts on, and leave laptops and right-sized liquids in their carry-on bags. If the critics are right, then perhaps PC does little harm, since the security measures PC lets travelers avoid do little good.
Still, on the basis of prudent analysis, it seems inevitable that demands will rise to rethink PC sooner or later – sooner being before a terrorist exploits the PC path of least resistance, and later being immediately thereafter.