Facebook said it will shut down its face-recognition system and delete the faceprints of more than 1 billion people amid growing concerns about the technology and its misuse by governments, police and others.
“This change will represent one of the largest shifts in facial recognition usage in the technology’s history,” Jerome Pesenti, vice president of artificial intelligence for Facebook’s new parent company, Meta, wrote in a blog post on Tuesday. “Its removal will result in the deletion of more than a billion people’s individual facial recognition templates.”
He said the company was trying to weigh the positive use cases for the technology “against growing societal concerns, especially as regulators have yet to provide clear rules.”
Facebook’s about-face follows a busy few weeks. On Thursday it announced Meta as the new name of the parent company, though the social network itself will keep the Facebook name. The company is facing perhaps its biggest public relations crisis to date after leaked documents from whistleblower Frances Haugen showed that it has known about the harms its products cause and often did little or nothing to mitigate them.
More than a third of Facebook’s daily active users — about 640 million people — have opted in to have their faces recognized by the social network’s system. But Facebook, which introduced facial recognition more than a decade ago, has recently begun scaling back its use.
In 2019, the company ended its practice of using facial recognition software to identify users’ friends in uploaded photos and automatically suggest that users “tag” them. Facebook was sued in Illinois over the tag suggestion feature.
The decision “is a good example of trying to make product decisions that are good for the user and the company,” said Kristen Martin, a professor of technology ethics at the University of Notre Dame. She added that the move also demonstrates the power of regulatory pressure, since the face recognition system has been the subject of harsh criticism for over a decade.
Researchers and privacy activists have spent years raising questions about the technology, citing studies that found it worked unevenly across boundaries of race, gender or age.
The problem with face recognition is that in order to use it, companies have had to create unique faceprints of huge numbers of people – often without their consent and in ways that can be used to fuel systems that track people, said Nathan Wessler of the American Civil Liberties Union, which has fought Facebook and other companies over their use of the technology.
“This is a tremendously significant recognition that this technology is inherently dangerous,” he said.
At least seven states and nearly two dozen cities have limited government use of the technology amid fears over civil rights violations, racial bias and invasion of privacy. Debate over additional bans, limits and reporting requirements has been underway in about 20 state capitals this legislative session, according to data compiled by the Electronic Privacy Information Center in May of this year.
Meta’s newly wary approach to facial recognition follows decisions by other U.S. tech giants such as Amazon, Microsoft and IBM last year to end or pause their sales of facial recognition software to police, citing concerns about false identifications and amid a broader U.S. reckoning over policing and racial injustice.
President Joe Biden’s science and technology office in October launched a fact-finding mission to look at facial recognition and other biometric tools used to identify people or assess their emotional or mental states and character.
European regulators and lawmakers have also taken steps toward blocking law enforcement from scanning facial features in public spaces, as part of broader efforts to regulate the riskiest applications of artificial intelligence.
Facebook’s face-scanning practices also contributed to the $5 billion fine and privacy restrictions imposed by the Federal Trade Commission in 2019. Facebook’s settlement with the FTC after the agency’s yearlong investigation included a promise to require “clear and conspicuous” notice before people’s photos and videos were subjected to facial recognition technology.