Among the thousands of staff at Facebook, there is one group that talks about the company not as a social network, but as a battlefield.

For those working in Facebook’s information operations disruption — or “info ops” — team, the platform is “terrain” where war is waged; its employees are “defenders” against malicious “attackers”; and they must force their foe “downhill” to a position of weakness.

This is daily jargon for the staff tasked with detecting, and then thwarting, co-ordinated disinformation campaigns that typically originate from Russia, Iran and south-east Asia, and are created to sow “chaos and distrust” with the intention of swaying geopolitical debate.

Made up of several dozen former intelligence operatives, investigative journalists, and hackers globally, and guided by a team of executives from other parts of the business, theirs is an operation that, from a standing start two years ago, has become increasingly slick.

In 2017, the group worked for six months to contain and take down a campaign attributed to the Internet Research Agency, a Russian troll farm. In the second half of 2018, it removed at least 20 campaigns, including one, designed to meddle in the US midterm elections, that took only six hours to analyse and shut down.

But the company still faces scrutiny for its perceived failure to fully stem Russian interference in 2016 and for the spread of fake news. For the info ops team, this now means acute pressure to double down on its efforts ahead of the 2020 US presidential election, as so-called information warfare becomes more prominent.

In particular, adversaries are swiftly developing new tools and tactics, including distancing themselves from campaigns by tapping a growing number of clandestine marketing and PR companies offering “manipulation for hire”. 

“The pace at which these things are evolving isn’t getting slower,” said Nathaniel Gleicher, who has overseen Facebook’s cyber security policy since 2016 after stints at the White House and as a cyber crime prosecutor at the US Department of Justice. 

“The actors will do more and more to exploit our natural weaknesses: perception of bias, division within society.”

From ‘crazy’ to co-ordinated

Information operations seeking to influence political sentiment have long been used in warfare, dating back as far as Roman times. 


But social media platforms have provided new turf for groups to operate globally and made it easier for “bad actors” to co-opt strangers into doing their bidding, such as unwittingly spreading false information.

Facebook’s info ops team, which has shuttered campaigns from India to Alabama, had a tricky birth. Mark Zuckerberg initially dismissed as “crazy” the idea that fake news influenced the outcome of the 2016 US election, before apologising for the comment in 2017 as new evidence of the extent of Russian meddling began to emerge.

Later that year, the company carved out an official info ops team within its wider cyber security department, and shortly afterwards launched a more formal process for tackling information operations, fashioning its own definition of what should be stamped out and how.

Threat investigators

The core team is made up of several dozen “threat investigators”, based in Menlo Park, Washington DC, Europe and Asia, each typically handling multiple projects at a time. 

It has grown rapidly from a handful of employees at the end of 2017 to several dozen today, with the company tapping cyber security experts from law enforcement, intelligence agencies and the White House, as well as from the private sector and academia.

The content the info ops team focuses on — across Facebook proper but also Instagram and WhatsApp — must meet two strict criteria. First, it must be “inauthentic”: users misrepresenting themselves in order to manipulate public debate, for example. Second, the efforts must be co-ordinated in some way.

“Information operations are fundamentally about weaponising uncertainty; they’re not about achieving a measurable, clear goal so much as they are about increasing distrust,” Mr Gleicher said. 

Staff tend to receive tip-offs about suspect behaviour — either from automated systems that Facebook has set up, or from an external source such as a researcher, academic or government.

They then use data analytics and manual investigations to build a clearer picture of the group and its tactics. Perpetrators are classified into categories — foreign versus domestic actors, government versus non-state actors, and politically versus financially motivated actors — with different categories assigned different levels of urgency.

“If it’s something that’s related to Russia and we’ve found something that looks like it’s state-sponsored, that probably trumps anything that’s financially motivated because we know how important that is,” said one threat investigator, who declined to be named in order to protect her identity from the bad actors she tracks. 
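Facebook has not published its triage logic, but the kind of prioritisation the investigators describe could be sketched roughly as below. The categories come from the article; the scoring weights and names are purely illustrative assumptions, not the company’s actual system.

```python
from dataclasses import dataclass

# Illustrative sketch only: the categories (foreign/domestic, state/non-state,
# political/financial motivation) are drawn from the article; the weights are
# invented for demonstration and are not Facebook's real triage rules.

@dataclass
class Campaign:
    origin: str        # "foreign" or "domestic"
    actor: str         # "state" or "non_state"
    motivation: str    # "political" or "financial"

def triage_priority(c: Campaign) -> int:
    """Return a rough urgency score: higher means investigate sooner."""
    score = 0
    if c.origin == "foreign":
        score += 2
    if c.actor == "state":
        score += 3   # suspected state sponsorship trumps commercial motives
    if c.motivation == "political":
        score += 2
    return score

# A suspected state-sponsored foreign political operation outranks
# a domestic, financially motivated one.
print(triage_priority(Campaign("foreign", "state", "political")))      # 7
print(triage_priority(Campaign("domestic", "non_state", "financial")))  # 0
```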


Staying apolitical

Threat investigators take their findings to a team of 10 to 20 Facebook executives who must decide if the bad actors have indeed violated Facebook’s policies.

If that bar is met, the reviewers must decide how to shut the campaign down as quickly as possible while ensuring the takedown is as disruptive to the operation as it can be: move too quickly and something could be missed; move too slowly and the campaign could have a geopolitical impact.

Facebook is careful to remain apolitical and to avoid inferring what the precise motivations of a bad actor may be: this is left to third-party researchers such as the Atlantic Council, who publish in-depth reports alongside takedowns. 

Still, Mr Gleicher says the group “regularly” passes information on to law enforcement and governments. 

The process was designed to shield Facebook from accusations of bias against or in favour of any government, Mr Gleicher said, as the company was increasingly finding instances of politicians running disinformation operations against their own citizens. Staying neutral, critics say, also guards the company against any suggestion that it is an editorialising publisher that would therefore need to be regulated as such.

But David Agranovich, who heads up the threat review process at Facebook and was formerly the director of intelligence for the White House’s National Security Council, said: “We recognise that if we do attempt to engage in some conjecture . . . and we say something and we’re wrong, the consequences are really high.”

Manipulation for hire

In 2017, Facebook listed three main features of information operations: targeted data collection, including account takeovers and data theft; the creation of content, both real and fake news; and the false amplification of that content, for example via fake accounts and personae.

Staff are quick to point to efforts to address these issues: Facebook has developed technology to better weed out fake accounts and it works with third-party fact-checkers. It also ran a pilot ahead of the US midterms to better secure the Facebook accounts of staff working on political campaigns.


Meanwhile, the introduction of more transparency around political adverts has made it more arduous and expensive for bad actors to interfere. 

But the team faces new challenges. One is the commercialisation of the space: organised and government-backed troll farms are now being replaced by marketing and PR companies offering manipulation-for-hire.

While the tactics used by these private companies are similar, their motivations — and the actual source of the campaign — are now harder to track.

One non-government domestic campaign in the Philippines, taken down by Facebook, was led by a marketing company with 45m followers. Ahead of the Brazilian elections, several social media marketing companies were behind similar campaigns, Mr Agranovich added.

“The services they were offering were things like, ‘We will organise people and pay them to post . . . on your behalf, or we have a network of fake accounts, you pay us and then we’re going to use that network to go and comment on your behalf’,” he said. 

“They’re doing it as a service and that in a way disperses the breadth of these type of activities, both geographically and the type of actors that are involved,” Mr Agranovich said. 

Blurred lines

Facebook is also grappling with a shift towards campaigns that co-opt real people, such as journalists, to amplify their messages rather than relying on fake accounts, or that use a real identity from the outset. It is a tricky area to police, given the implications for free speech: where does legitimate advocacy end and manipulation begin?

The nature of some campaigns has also changed. Whereas ahead of the 2016 election, Russian campaigns operated under the radar, some more recent efforts, including those around the US midterms, have operated more flagrantly.

Here, the existence of the operation itself is part of a narrative meant to create discomfort and concern, the team said.

“It’s this desire to make people not trust anything and to co-opt [people] in the field of journalism or to make people distrust Facebook, make people distrust governments and the outcome of an election,” the threat investigator said.

“It’s just that specific type of tactic . . . probably worries a lot of us because successfully combating it requires a whole-of-society response,” she added.



Via Financial Times