Today let’s talk about the highest-profile conflict to date between Meta and its Oversight Board, an independent organization the company established to help it navigate the most difficult questions related to policy and content moderation.
Since before the board was created, it has faced criticism that it primarily serves a public-relations function for the company formerly known as Facebook. The board relies on funding from Meta, its use of user data is governed by a contract with the company, and its founding members were hand-picked by Meta itself.
Aiding in the perception that it’s mostly a PR project is the fact that to date, Meta and the board have rarely been in conflict. In the first quarter of the board’s existence, Meta implemented 14 of the 18 recommendations it made. And even though the board often rules against Facebook’s content moderators, ordering removed posts to be restored, none of those reversals has generated any significant controversy. (Also, from Facebook’s perspective, the more the board reverses it, the more credible the board is, and thus the more blame it can shoulder for any unpopular calls.)
That’s what made this week’s statements, published by both sides, so noteworthy.
After Russia’s invasion of Ukraine in February, Meta had asked the board to issue an advisory opinion on how it should moderate content during wartime. The conflict had raised a series of difficult questions, including under what circumstances users can post photos of dead bodies or videos of prisoners of war criticizing the conflict.
And in the most prominent content moderation question of the invasion to date, Meta decided to temporarily permit calls for violence against Russian soldiers, Vladimir Putin, and others.
All of which raised important questions about the balance between free expression and user safety. But after asking the board to weigh in, Meta changed its mind — and asked board members to say nothing at all.
From the company’s blog post:
Late last month, Meta withdrew a policy advisory opinion (PAO) request related to Russia’s invasion of Ukraine that had previously been referred to the Oversight Board. This decision was not made lightly — the PAO was withdrawn due to ongoing safety and security concerns.
While the PAO has been withdrawn, we stand by our efforts related to the Russian invasion of Ukraine and believe we are taking the right steps to protect speech and balance the ongoing security concerns on the ground.
In response, the board said in a statement that it is “disappointed” by the move:
While the Board understands these concerns, we believe the request raises important issues and are disappointed by the company’s decision to withdraw it. The Board also notes the withdrawal of this request does not diminish Meta’s responsibility to carefully consider the ongoing content moderation issues which have arisen from this war, which the Board continues to follow. Indeed, the importance for the company to defend freedom of expression and human rights has only increased.
Both statements were extremely vague, so I spent a day talking with people familiar with the matter who could fill me in on what happened. Here’s what I’ve learned.
One of the most disturbing trends of the past year has been the way that authoritarian governments in general, and Russia in particular, have used the intimidation of employees on the ground to force platforms to do their bidding. Last fall, Apple and Google both removed from their respective stores an app that enabled anti-Putin forces to organize before an election. In the aftermath, we learned that Russian agents had threatened their employees, in person, with jail time or worse.
Life for those employees — and their families — has only become more difficult since Putin’s invasion. The country passed draconian laws outlawing truthful discussion of the war, and the combination of those laws and sanctions from the United States and Europe has forced many platforms to withdraw services from Russia entirely.
In the wake of Meta’s decision to allow calls for violence against the invaders, Russia said that Meta had engaged in “extremist” activities. That potentially put hundreds of Meta employees at risk of being jailed. And while the company has now successfully removed its employees from the country, the extremism language could mean that they will never be allowed to return so long as they work at Meta. Moreover, it could mean that employees’ families in Russia could still be subject to persecution.
There is precedent for both outcomes under Russia’s extremism laws.
So what does the Oversight Board have to do with it?
Meta had asked for a fairly broad opinion about its approach to moderation and Russia. The board has already shown a willingness to make expansive policy recommendations, even on narrower cases submitted by users. After asking for the opinion, the company’s legal and security teams became concerned that anything the board said might somehow be used against employees or their families in Russia, either now or in the future.
Technically, the Oversight Board is a distinct entity from Meta. But plenty of Westerners still refuse to recognize that distinction, and company lawyers worried that Russia wouldn’t, either.
All of this is compounded by the fact that tech platforms have gotten little to no support to date, from either the United States or the European Union, in their struggles to keep key communication services up and running in Russia and Ukraine. It’s not obvious to me what western democracies could do to reduce platforms’ fears about how Russia might treat employees and their families. But discussions with executives at several big tech companies over the past year have made it clear that they all feel like they’re out on a limb.
All that said, the news still represents a significant blow to the Oversight Board’s already fragile credibility — and arguably reduces its value to Facebook. The company spent several years and $130 million to create an independent body to advise it on policy matters. To ask that body for its advice — advice that would not even be binding on the company — and then decide belatedly that such advice might be dangerous calls into question the point of the entire enterprise. If the Oversight Board’s only role is to handle the easy questions, why bother with it at all?
Facebook and the board declined to comment to me beyond their statements. It’s fair to note that despite the reversal here, the company has stood up to Russia in some important ways — including standing by that decision to let Ukrainians call for Putin’s death. Meta could have rolled over for Russia on that one, and chose not to.
At the same time, once again we find that at a crucial moment, Facebook executives failed to properly understand risk and public perception. Russia has been threatening platform employees since at least last September. Whatever danger there was for employees and their families existed well before the moment that Facebook sought an opinion from its board. To realize that only weeks later… well, talk about an oversight.
I’m on record as saying that the Oversight Board has changed Facebook for the better. And when it comes to authoritarians threatening platform employees, tech companies have distressingly few options available to them. The Russia case, in this as in so many other situations, was truly a no-win situation.
But that doesn’t mean it won’t have collateral damage for both Meta and its board. Critics always feared that if the stakes ever got high enough, Facebook would blink and decide to make all the relevant decisions itself. And then Vladimir Putin went and invaded his neighbor, and the critics were proven right.
Source: The Verge