Facebook disputes report its AI has little effect on hate speech

Facebook on Sunday responded to a news report that its artificial intelligence program has had little effect in curbing and removing hateful and violent content from the social network. The Wall Street Journal cited internal documents from 2019 in its reporting that the social network's engineers estimated the company's algorithms remove only a small fraction of problematic content that violates its rules.

“The problem is that we do not and possibly never will have a model that captures even a majority of integrity harms, particularly in sensitive areas,” a senior engineer and research scientist wrote in a mid-2019 note, according to the Journal.

The company has faced growing scrutiny over its content moderation, especially after the Jan. 6 riot at the US Capitol, which underscored how online hate can spill into the real world.

But Facebook contends that the prevalence of hate speech on the platform has declined nearly 50% over the past three quarters, to about 0.05% of content viewed, or about 5 out of every 10,000 views.

“Data pulled from leaked documents is being used to create a narrative that the technology we use to fight hate speech is inadequate and that we deliberately misrepresent our progress,” Facebook Vice President of Integrity Guy Rosen wrote in a blog post on Sunday. “This is not true.

“We don’t want to see hate on our platform, nor do our users or advertisers, and we are transparent about our work to remove it,” Rosen wrote.

The company has been under mounting pressure in the weeks since Frances Haugen, a former Facebook employee turned whistleblower, disclosed thousands of documents and internal communications showing Facebook was aware of the dangers of its products but downplayed those effects publicly. Lawmakers across the political spectrum have so far responded with renewed interest in holding Facebook to account.

Haugen appeared before a US Senate subcommittee earlier this month and alleged that Facebook’s products “harm children, stoke division and weaken our democracy.” Facebook CEO Mark Zuckerberg criticized Haugen’s testimony, saying it presented a “false picture” of the social network.
