Facebook Says AI Getting Better at Spotting Unwanted Content
Facebook on Wednesday said that its software is getting more skilled at spotting banned content on the social network, then working with humans to quickly remove terrorist videos and more.

“While we err on the side of free expression, we generally draw the line at anything that could result in real harm,” Facebook chief executive Mark Zuckerberg said during a briefing on the company’s latest report on ferreting out posts that violate its policies.

“This is a tiny fraction of the content on Facebook and Instagram, and we remove much of it before anyone sees it.”


Facebook has been investing heavily in artificial intelligence (AI) to automatically spot banned content, often before it is seen by users, and human teams of reviewers who check whether the software was on target.

Facebook has more than 35,000 people working on safety and security, and spends billions of dollars annually on that mission, according to Zuckerberg.

“Our efforts are paying off,” Zuckerberg said. “Systems we built for addressing these issues are more advanced.”

When it comes to detecting hate speech, Facebook software now automatically finds 80 percent of the content removed — a massive improvement from two years ago, when nearly all such material went unaddressed until users reported it, according to the California-based firm.

Nettling nuance
Zuckerberg noted that hate speech is tougher for AI to detect than nudity in images or video because of “linguistic nuances” that require context that could make even common words menacing.

Adding to the difficulty, videos of attacks driven by bias against a race, gender, or religion could be shared to condemn such violence rather than glorify it.

People continue to try to share video of the horrific mosque attacks in Christchurch, New Zealand, with the social network's systems blocking 95 percent of those attempts, according to executives.

A lone gunman opened fire on two mosques in the city of Christchurch in March, killing and wounding scores of Muslims while broadcasting the assaults live on Facebook.

Facebook employs terrorism experts as part of a team of more than 350 people devoted to preventing terrorist groups from using the social network, according to head of global policy management Monika Bickert.