Last month, Sen. Ted Cruz (R-TX) held a hearing to discuss what he perceived to be a conservative bias on social media platforms like Facebook and Twitter. A representative from each of those companies sat before him and a panel predominantly composed of other Republicans. (Most Democratic members decided to skip the “bias” hearing entirely.)
“Not only does Big Tech have the power to silence voices with which they disagree, but Big Tech likewise has the power to collate a person’s feed so they only receive the news that comports with their own political agenda,” Cruz said at the hearing.
Cruz conceded that most of his party’s complaints were derived solely from personal stories. “Much of the argument on this topic is anecdotal. It’s based on one example or another example,” Cruz said last month. “There’s a reason for that: because we have no data. There is no transparency. Nobody knows how many speakers Facebook is blocking, how many speakers Twitter is blocking. Nobody knows what the raw data is in terms of bias.”
That lack of data has been a crucial force behind Republicans’ accusations that social networks are biased against conservatives, accusations that piggyback off rising left-wing concerns about data privacy and market power. Again and again, conservatives like Cruz and Sen. Josh Hawley (R-MO) have used personal stories and anecdotes to stoke resentment against platforms and their moderators. And with no broader data to disprove them, the anecdotes are hard to argue with.
On Wednesday, the White House took those theories one step further with a new tool for people who feel as though they’ve been censored by social media companies like Facebook and Twitter. It’s a mystifying tool, working equally well as a threat to social media companies and a list-building tool for the Trump campaign. But the true point of the form could be to fill the data gap pointed out by Cruz and further reinforce Trump supporters’ sense that they’re being victimized by moderators.
It would be difficult for the White House to take any direct action in response to reports submitted to the form. There’s simply no real White House power over the moderation decisions made by a third party, and both the First Amendment and Section 230 of the Communications Decency Act could prove to be a problem if the White House tries to intervene. There’s also a strong call for anyone submitting to subscribe to an associated newsletter, raising concerns that the entire project is designed to build out the campaign’s fundraising lists.
But in light of Cruz’s concerns last month, the project could also serve a more alarming purpose. Heavily promoted by White House accounts and President Trump, the form is most likely to gain submissions from users who already agree that bias against conservatives is a problem, heavily skewing the results. In the next congressional hearing, Republicans could point to White House data indicating that the vast majority of moderation incidents target conservatives, all based on the White House form. With no hard data to say otherwise, it could turn into a powerful talking point.
Of course, platforms do have the data — they’re just not sharing it. At that same hearing, Twitter’s director of public policy and philanthropy Carlos Monje Jr. said the company had conducted its own internal study, which found no evidence of political bias on the platform. “Our quality filtering and ranking algorithms do not result in Tweets by Democrats or Tweets by Republicans being viewed any differently,” Monje said in his opening remarks. “Their performance is the same because the Twitter platform itself does not take sides.”
But when pressed to release the study, Twitter has declined, pointing lawmakers instead toward its twice-yearly transparency reports, which do not break down removed posts by political affiliation. Platforms have said that they don’t tag posts based on political party, but without independently derived datasets, it’s impossible to disprove conservatives’ claims.
Facebook’s work with Social Science One could be a powerful example here. The company has committed millions of dollars to researchers and universities to study the impact of fake news on its platform, with results to be made public once the work is complete. A similar effort directed at this alleged political bias could help guide the moderation conversation back to more factually solid ground.
There are obvious reasons why this hasn’t happened. A release of this data could be embarrassing for these companies, depending on what the researchers find. Even if the data shows moderation practices at their best, a full release could be unpredictable, leading the conversation to uncomfortable places. But with the Trump administration likely preparing to make conservative bias on social media a major 2020 policy issue, platforms simply can’t afford to let the White House set the terms of the debate. No matter how bad, platform data would be a better look than whatever this form produces.