The Corner

Science & Tech

Some Questions about the TSA’s New Facial-Recognition Program

A Transportation Security Administration (TSA) official checks a bag at Terminal 4 of JFK airport in New York City, May 17, 2017. (Joe Penney/Reuters)

The Transportation Security Administration (TSA) was established in the wake of 9/11 as part of the bipartisan Aviation and Transportation Security Act. Its purpose: to “prevent similar attacks in the future.”

The TSA has faced many criticisms since then. Some have described the rigorous screening process each passenger goes through before boarding a flight as "security theater" — something that might make passengers feel safe but isn't actually effective at preventing threats. These criticisms are not unfounded. Over the past ten years, the TSA has failed multiple undercover Department of Homeland Security tests designed to gauge the agency's efficacy at preventing contraband from making it past security. One 2017 investigation found that the failure rate of undercover tests was "in the ballpark" of 80 percent. While this was an improvement from a failure rate of 95 percent in 2015, the results were still abysmal. It wasn't the TSA but rather the FBI and MI5 that stopped the two biggest travel-related terrorism events of the past 20 years.

Now, the TSA has begun implementing a new facial-recognition program in more than a dozen airports in the U.S. and Puerto Rico, including in Denver, Baltimore, D.C. (Reagan), Atlanta, Boston, Dallas, and Detroit. You may have already experienced it. Here's how it works: Passengers approach TSA agents and are asked to scan their passports and stand in front of a camera. The camera compares the live image against the passport photo to verify the passenger's identity, and then passengers can move forward. Small signs indicate that one can opt out of the program, but few passengers seem aware of that. In my experience, no instructions on how to opt out are posted. I was able to opt out multiple times in the Denver and Reagan airports by telling the security agent I wanted the camera off. I received a few odd looks and extra scrutiny of my ID from one agent, but other than that, it was fairly easy. According to the TSA, the program is meant to provide "improved accuracy and speed of identity verification, while making the passenger experience faster and more seamless."

The TSA has rolled this program out quietly. But it has not gone entirely unnoticed. One organization, the Algorithmic Justice League (AJL), which advocates "equitable and accountable AI," is seeking submissions on travelers' experiences with the new program. Among AJL's concerns are racial and gender bias in the technology, data security, and a lack of transparency surrounding the opt-out process. Some passengers have had trouble opting out at the airport. Eventually, the biometrics program will not be optional at all.

Questions arise. Is this program really faster than the system already in place? Is there really evidence that facial-recognition software is an improvement? If not, how might failures in the recognition software be mitigated over time? Will this program help anti-terrorism efforts? As of now, these questions have unsatisfying answers.

Moreover, many are concerned about the privacy-rights implications and the downplaying of the program’s current opt-out nature. TSA administrator David Pekoske’s promise of saving “a couple seconds if not a minute” in the airport is little comfort in the face of a program reminiscent of certain familiar surveillance states. Those of us with valid concerns about the government’s use of powerful technology that can be used to surveil and infringe upon our privacy deserve a hearing.
