Facebook CEO Mark Zuckerberg wants to hire thousands of content reviewers to pore over flagged content this year, a move he reiterated during his hearings last week on Capitol Hill.
“By the end of this year, we’re going to have more than 20,000 people at the company working on security and content review, because this is important,” Zuckerberg explained during the hearings, adding that when content is flagged on Facebook (FB), reviewers examine it and take it down if it violates the social network’s Community Standards.
Facebook is already well on its way to meeting Zuckerberg’s ambitious goal, having hired 15,000 people so far — 7,500 of whom are content reviewers, the social network told Yahoo Finance. Those reviewers are tasked with examining text, photos and videos flagged by Facebook users as pornographic, racist or violent. The work can be fast and furious: a Wall Street Journal report from December indicated that content reviewers may view up to 8,000 posts a day.
According to a Facebook spokesperson, content reviewers are a mix of full-time employees and workers supplied by recruiting firms such as Accenture, Pro Unlimited and Arvato.
Accenture, Pro Unlimited and Arvato did not immediately respond to requests for comment.
With each hire, Facebook seeks native language speakers with “market knowledge” — an understanding of local and regional issues, such as who the political figures are in a given area. While flagging nudity and pornography may not require it, context is key to identifying, say, instances of hate speech that tie into local issues in a certain country or region.
Facebook does not disclose specifically how much it pays content reviewers; however, the social network told Yahoo Finance they are paid “above average” for the industry. One content reviewer who left in October 2016 reportedly earned $24 an hour.
Being a content reviewer can be a grueling job, given that reviewers’ days are spent poring over photos, videos and text that can be extremely violent, graphic or prurient.
To combat that, Facebook now offers training and support, including regular access to psychologists and therapists.
Over time, Zuckerberg added, the social network plans to shift toward having more of the content on its site flagged by artificial intelligence tools developed in-house. Already, Facebook’s AI tools are used to flag 99% of ISIS and Al Qaeda content before any Facebook user sees it.
Some problems lend themselves more easily to AI solutions than others, Zuckerberg said during the congressional hearings, pointing to hate speech as one of the most challenging, given how “linguistically nuanced” it can be. However, he expressed optimism that sometime within the next five to 10 years, the social network will have AI tools that can successfully navigate those nuances.
“But today, we’re just not there on that,” Zuckerberg acknowledged. “So a lot of this is still reactive. People flag it to us. We have people look at it. We have policies to try to make it as not subjective as possible. But, until we get it more automated, there is a higher error rate than I’m happy with.”
A Facebook spokesperson echoed Zuckerberg’s comments and said the social network continues to make strides on that front.
“We are working on improving our AI and automation to help us increase accuracy and speed,” the spokesperson told Yahoo Finance. “But algorithms are not yet as good as people when it comes to understanding context, which is why people remain at the center of the content review process.”
That means, in the short term, as Facebook beefs up its efforts to thwart inappropriate content, there won’t be a shortage of content reviewer roles for interested job seekers — provided, of course, they are qualified and can tough it out.
—
JP Mangalindan is the Chief Tech Correspondent for Yahoo Finance covering the intersection of tech and business. Email story tips and musings to jpm@oath.com. Follow him on Twitter or Facebook.