
Facebook Deletes 500 Million Fake Accounts In Effort To Clean Up Network

16 May 2018

In Facebook's first quarterly Community Standards Enforcement Report, the company said most of its moderation activity was waged against fake accounts and spam posts, with 837 million spam posts and 583 million fake accounts acted upon. For every 10,000 views of content on Facebook, the company said, roughly 8 were removed for featuring sex or nudity in the first quarter, up from 7 at the end of the previous year.

The report, released Tuesday, revealed how much content has been removed for violating standards.

The company removed or put a warning screen for graphic violence in front of 3.4 million pieces of content in the first quarter, almost triple the 1.2 million a quarter earlier, according to the report.

Now, Facebook is pulling back the curtain on its efforts.

Facebook says AI has played an increasing role in flagging this content.

It also explains some of the reasons for large swings in the number of violations found between Q4 and Q1, which were usually external factors or advances in the technology used to detect objectionable content.

The company previously enforced community standards by having users report violations, which trained staff would then review. "It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important".

The figure represents between 0.22 and 0.27 percent of the total content viewed by Facebook's more than two billion users from January through March. "This is especially true where we've been able to build artificial intelligence technology that automatically identifies content that might violate our standards".

And backing up the company's AI tools are thousands of human reviewers who manually pore over flagged content, trying to determine whether it violates Facebook's community standards. For the most part, the company has not provided more details on its hiring plan, including how many reviewers will be full-time Facebook employees and how many will be contractors.

The number of pieces of content depicting graphic violence that Facebook took action on during the first quarter of this year was up 183% on the previous quarter. During Q1, Facebook found and flagged 85.6% of such content it took action on before users reported it, up from 71.6% in Q4.

The company also removed 21 million pieces of content that contained adult nudity or sexual activity, flagging nearly 96 percent of the content with its own systems.

Improved technology also helped Facebook take action against 1.9 million posts containing terrorist propaganda, a 73 percent increase.

"It may take a human to understand and accurately interpret nuances like... self-referential comments or sarcasm", the report said, noting that Facebook aims to "protect and respect both expression and personal safety".

However, Facebook's ability to find hate speech before users reported it lagged behind other categories, with the company's systems picking up only 38 percent of it. The renewed attempt at transparency is a welcome start for a company that has come under fire for allowing its social network to host all kinds of offensive content.

The social network says that when action is taken on flagged content, it does not necessarily mean the content has been taken down. The post said Facebook found nearly all of that content before anyone had reported it, and that removing fake accounts is the key to combating that type of content.
