When Having Your Face ‘Recognized’ Is A Treacherous Liability

Facial recognition tech presents an existential threat to the privacy and safety of darker-skinned people and immigrants. How do we protect ourselves?
Over the last few years, facial recognition blocking glasses have gained mainstream popularity, purportedly helping users avoid the types of light rays required for facial scans.
Donald Iain Smith via Getty Images

I can’t remember where I was going, which airline I was flying, or even which airport I was in, but I remember the moment I realized I could opt out of the TSA’s new facial recognition screening. “Participation in TSA facial recognition technology is optional,” read the blue words against the white sign.

Rudimentary facial recognition technology was introduced in the U.S. in the 1960s and has since evolved into AI-assisted systems in which a camera or other device can scan an individual’s face and compare it against tens of billions of images in a database at breakneck speed. The earliest systems were used by law enforcement to speed up the identification of people suspected of crimes.
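For the technically curious, that matching step typically works by reducing each face to a list of numbers (an “embedding”) and then hunting for the nearest stored embedding. The sketch below uses the open-source `face_recognition` Python library purely as a stand-in to show the general shape of the pipeline; the photo file names are hypothetical, and this is not a representation of any agency’s actual software.

```python
# Purely illustrative: the general shape of a face-matching pipeline,
# using the open-source `face_recognition` library (built on dlib).
# File names are hypothetical; this is not any agency's actual system.
import face_recognition
import numpy as np

# Load a probe photo and a tiny stand-in "database" of known photos.
probe = face_recognition.load_image_file("probe.jpg")
gallery = [face_recognition.load_image_file(f"person_{i}.jpg") for i in range(3)]

# Each detected face is reduced to a 128-number embedding vector.
# (For simplicity this assumes exactly one face per photo.)
probe_vec = face_recognition.face_encodings(probe)[0]
gallery_vecs = [face_recognition.face_encodings(img)[0] for img in gallery]

# "Recognition" is nearest-neighbor search: smaller distance = more similar.
distances = face_recognition.face_distance(gallery_vecs, probe_vec)
best = int(np.argmin(distances))
print(f"closest match: person_{best} at distance {distances[best]:.3f}")

# A fixed cutoff (0.6 by default in this library) turns that distance into
# a yes/no "match" -- and it is exactly this thresholded guess, scaled up
# to billions of images, that can misidentify a stranger as a suspect.
```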

But in America, where racial and gender bias pervade the deepest parts of the nation’s criminal justice and carceral systems, AI-assisted facial recognition presents an existential threat to the privacy and safety of darker-skinned people and immigrants, especially.

So how are we protecting ourselves?

Over the last few years, facial recognition blocking glasses have gained mainstream popularity, purportedly helping users avoid the types of light rays required for facial scans. Human eyes are as unique as fingerprints, says Steven Lee, an optometrist and director of digital innovation at Zenni Optical, and facial recognition software uses the eyes, along with other points on a face, to construct a person’s facial identifiers.

In policing, surveillance can be deployed in areas deemed by law enforcement to be “high crime,” which often include majority Black and Latino communities.

Zenni wearers can see their tech working in real time, Dr. Steven Lee says; the glasses often block a wearer’s ability to unlock their phone through facial recognition.
Courtesy of Zenni Optical

The experts I spoke with offered various solutions to the problem of prejudiced surveillance. I quickly learned how crucial it is for Black and brown tech professionals, the former of whom are underrepresented in the industry, to be involved in every step of creating these types of technologies. Legislators, who have struggled to pass comprehensive protections, also need to ensure that a set of privacy-protecting guardrails is in place; California passed the nation’s first AI safety legislation at the end of September.

“I think now more than ever we need democratic norms,” says Mutale Nkonde, an artificial intelligence policy researcher pursuing her doctorate in digital humanities at Cambridge University. “But now more than ever, we need to be able to respect our communities and then pursue models of public safety that are not based on hyper-surveillance in some places, and then under-surveillance in others.”

In January 2020, before the COVID-19 pandemic upended the globe and protests against police violence rocked cities in the U.S. and around the world, a Michigan father named Robert Williams was arrested for a theft he didn’t commit after facial recognition software falsely identified him as the culprit. Despite his innocence, Williams spent more than a full day in jail.

The arrests of Williams and two other Detroit-area men were reported by Wired as among the first known cases in the nation of wrongful arrests based on faulty facial recognition matches. “All three men are fathers,” writer Khari Johnson reported, “and all three are Black.”

Williams sued the Detroit police over his wrongful arrest. Under the resulting 2024 settlement, the department is prohibited from using facial recognition as the sole basis for arrests. The American Civil Liberties Union, or ACLU, which represented Williams in the case, called the protections “the nation’s strongest police department policies and practices constraining law enforcement’s use of” facial recognition technology.

The incident still upended Williams’s life. “The scariest part is that what happened to me could have happened to anyone,” he said in a press release, after the settlement. What happened to Williams could still happen to anyone, as surveillance continues to seep into our everyday lives.

It is within this environment that police departments have increasingly deployed facial recognition technology purportedly designed to help officers identify perpetrators. In New York City, for instance, the New York Police Department received nearly 10,000 requests for facial matching and reports it positively identified more than 2,500 “possible matches.”

An article posted this past Tuesday by 404 Media reported that officers from Immigration and Customs Enforcement and Customs and Border Protection were caught on video openly using smartphone facial recognition technology to verify people’s citizenship. The technology, 404 Media wrote, was deployed during “stops that seem to have little justification beyond the color of someone’s skin.”

Meka Egwuekwe is a software engineer who now runs a nonprofit teaching coding, AI, and tech to community members in Memphis, Tennessee. In a phone call, he recalled working on a project for a criminal justice-related customer around 20 years ago and noticing, even then, the technology’s limitations in accurately identifying faces.

“There are real communities that are over-policed and over-surveilled,” Egwuekwe says. “So, even if they got the accuracy right, there’s still a lot of policy that’s not right in terms of how it’s regulated and used.”

The criminal justice system doesn’t present the only facial recognition risk to vulnerable populations and people of color. Joy Buolamwini became a premier scholar in the tech privacy space when, while in graduate school, she realized facial recognition software couldn’t recognize her face unless she covered it with a white mask. Buolamwini eventually founded the Algorithmic Justice League, an organization dedicated to addressing prejudicial bias in technology and AI.

Faulty facial recognition has blocked students from taking exams, and the technology has been used in federal public housing complexes and during large-scale events like sporting games.

“This isn’t a Black problem,” says Nkonde, who founded AI for the People, a nonprofit addressing algorithmic bias. “This issue is a society problem. It’s just that Black people are the canaries in the coal mine.” Even as the technology has advanced to more accurately identify darker skin tones, says Nkonde, the privacy dangers it presents remain.

Zenni Optical’s Lee found himself at the intersection of tech and optometry after he invented an online eye exam. Years later, Lee helped create and launch Zenni Optical’s new tech offering: infrared defense glasses meant for all kinds of protection — including from unwanted tracking. The glasses are a new entrant into a burgeoning industry of facial recognition blocking spectacles designed to help shield wearers from the growing ubiquity of feature-identification surveillance.

“It does a lot of protection both in terms of the rays of light from outside,” Lee tells me, referring to blue light and the near-infrared light needed for facial recognition. Zenni’s lenses block up to 80% of near-infrared light.

“So think of it almost as a shield on your eyes,” Lee says. “It’s just this added layer of protection.” Starting at under $60, the EyeQLenz glasses are intended, like Zenni’s entire offering, to be accessible to an inclusive swath of consumers.

Zenni Optical joins a small but growing group of accessible glasses makers exploring anti-surveillance technology, even as technological advances increase the likelihood that you’re being recorded and positively identified without ever knowing it. Reflectacles, launched in 2017, for instance, offers several models of privacy glasses ranging from about $50 for glasses clips to about $230 for the sold-out Ghost Miasma frames. A 2013 Smithsonian magazine article noted of the emergence of this new type of discreet wearable technology, “Camouflage Couture is all the rage.” And researchers cited in a 2016 article in The Verge made glasses that could trick AI into misidentifying their wearer.

Zenni wearers can see their tech working in real time, Lee says, since the glasses often block a wearer from unlocking their phone through facial recognition.

And while these tools are now essential, we have to ask ourselves how it’s come to this. If an American wants to conceal themselves, products like these offer a way to do it, but Egwuekwe says the glasses are like a bandage that can stop the bleeding but doesn’t address the root cause of an injury. “Fundamentally, it’s this,” he says. “Black and Brown people shouldn’t need special glasses just to walk through their own neighborhoods without fear of being misidentified.”
