
Facebook users will no longer be able to use its Face Recognition system.
Sarah Tew/CNET
Facebook will shut down its facial recognition system this month and delete the face scan data of more than 1 billion users, the company said Tuesday. It cited societal concerns and regulatory uncertainty about facial recognition technology as the reasons.
More than one-third of the app’s daily active users have opted into its Face Recognition setting, the social network noted in a blog post.
“There are many concerns about the place of facial recognition technology in society, and regulators are still in the process of providing a clear set of rules governing its use,” wrote Jerome Pesenti, vice president of artificial intelligence at Facebook’s newly named parent company, Meta. “Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate.”
Facial recognition technology, which converts face scans into identifiable data, has become a growing privacy and civil rights concern. The technology is especially prone to errors involving people of color. In one test conducted by the American Civil Liberties Union using technology made by Amazon, 28 members of Congress, roughly 40% of whom were people of color, were incorrectly matched with arrest mugshots.
In the absence of federal regulations, cities and states have begun banning facial recognition systems used by police and government. In 2019, San Francisco was the first city to ban government use of the technology. Others, including Jackson, Mississippi; Portland, Oregon; and Boston, Cambridge and Springfield, Massachusetts, have followed. Over the summer, Maine enacted one of the most stringent bans on the technology.
Earlier this year, a judge approved a $650 million settlement in a class action lawsuit involving Facebook’s use of facial recognition technology in its photo-tagging feature. The feature generates suggested tags by using scans of previously uploaded photos to match people in newly uploaded shots. The lawsuit alleged the scans were created without user consent and violated Illinois’ Biometric Information Privacy Act, which regulates facial recognition, fingerprinting and other biometric technologies.