Following reports of genocide in Myanmar, Facebook banned the country’s top general and other military leaders who were using the platform to foment hate. The company also bans Hezbollah from its platform because of its status as a US-designated foreign terror organization, despite the fact that the party holds seats in Lebanon’s parliament. And it bans leaders in countries under US sanctions.
At the same time, both Facebook and Twitter have stuck to the tenet that content posted by elected officials deserves more protection than material from ordinary individuals, thus giving politicians’ speech more power than that of the people. This position is at odds with plenty of evidence that hateful speech from public figures has a greater impact than similar speech from ordinary users.
Clearly, though, these policies aren’t applied evenly around the world. After all, Trump is far from the only world leader using these platforms to foment unrest. One need only look to the BJP, the party of India’s Prime Minister Narendra Modi, for more examples.
Though there are certainly short-term benefits—and plenty of satisfaction—to be had from banning Trump, the decision (and those that came before it) raises more foundational questions about speech. Who should have the right to decide what we can and can’t say? What does it mean when a corporation can censor a government official?
Facebook’s policy staff, and Mark Zuckerberg in particular, have for years shown themselves to be poor judges of what is or isn’t appropriate expression. From the platform’s ban on breasts to its tendency to suspend users for speaking back against hate speech, or its total failure to remove calls for violence in Myanmar, India, and elsewhere, there’s simply no reason to trust Zuckerberg and other tech leaders to get these big decisions right.
Repealing 230 isn’t the answer
To remedy these concerns, some are calling for more regulation. In recent months, demands have abounded from both sides of the aisle to repeal or amend Section 230—the law that protects companies from liability for the decisions they make about the content they host—demands often accompanied by serious misrepresentations of how the law actually works, from politicians who should know better.
The thing is, repealing Section 230 would probably not have forced Facebook or Twitter to remove Trump’s tweets, nor would it prevent companies from removing content they find disagreeable, whether that content is pornography or the unhinged rantings of Trump. It is companies’ First Amendment rights that enable them to curate their platforms as they see fit.
Instead, repealing Section 230 would hinder competitors to Facebook and the other tech giants, and place a greater risk of liability on platforms for what they choose to host. For instance, without Section 230, Facebook’s lawyers could decide that hosting anti-fascist content is too risky in light of the Trump administration’s attacks on antifa.
This is not a far-fetched scenario: Platforms already restrict most content that could be even loosely connected to foreign terrorist organizations, for fear that material-support statutes could make them liable. Evidence of war crimes in Syria and vital counter-speech against terrorist organizations abroad have been removed as a result. Similarly, platforms have come under fire for blocking any content seemingly connected to countries under US sanctions. In one particularly absurd example, Etsy banned a handmade doll, made in America, because the listing contained the word “Persian.”
It’s not difficult to see how ratcheting up platform liability could cause even more vital speech to be removed by corporations whose sole interest is not in “connecting the world” but in profiting from it.
Platforms needn’t be neutral, but they must play fair
Despite what Senator Ted Cruz keeps repeating, there is nothing requiring these platforms to be neutral, nor should there be. If Facebook wants to boot Trump—or photos of breastfeeding mothers—that’s the company’s prerogative. The problem is not that Facebook has the right to do so, but that—owing to its acquisitions and unhindered growth—its users have virtually nowhere else to go and are stuck dealing with increasingly problematic rules and automated content moderation.
The answer is not to repeal Section 230 (which, again, would hinder competition) but to create the conditions for more competition. This is where the Biden administration should focus its attention in the coming months. And those efforts must include reaching out to content moderation experts from advocacy and academia to understand the range of problems faced by users worldwide, rather than simply focusing on the debate inside the US.
Why more countries need covid vaccines, not just the richest ones
There may also be another disparity. There are many people who, even if you offer them the vaccine, will not take it. And that’s partly because of the distrust. There is a much higher level of distrust among Latino and Black Americans, partly because of historical mistreatment.
Q: How are you seeing mistrust affect vaccination disparities globally?
A: When we think about mistrust on a global scale, that may be partly because of how the pharmaceutical industry prices things and how they have patents. Some countries may be thinking, “these companies from the US or Europe are really trying to sell us their expensive vaccines. But we can’t really afford them for our population in the first place because they are patented, and we are not allowed to just make a generic version of it.” They may be thinking “these companies are just trying to take advantage of us.” And there certainly have been examples of lower-income countries that have been exploited by the pharmaceutical industry.
In Indonesia, for example, this happened with H5N1. Whenever there’s an outbreak, if you’re a WHO member, you send samples to a WHO lab and they try to find out about this particular virus or disease. Based on genetic material sent from Indonesia, scientists developed therapeutics for H5N1 and tried to sell them back to Indonesia. Then Indonesia thought, “OK, these were our samples. Should there not have been collaboration? You’re using them to sell drugs back to us.”
Q: Does the US have a moral obligation to send people to other countries to help with vaccinations?
A: One of the problems is that we’re not able to train enough people in the local places. For Covax or other kinds of international collaboration, it’s not about sending people so much as it’s about “how do we help them build up their own infrastructure?” Even financial resources for training courses or other kinds of ways to beef up their own human resources. Because you can imagine we’d go, and then we’d leave, and they’re not any better in terms of infrastructure.
Q: How would it affect higher-income countries if other, lower-income countries don’t receive their vaccines until later? Recent research says, for example, that if poor countries don’t get vaccines, it will disrupt the economy for everyone.
A: While it’s still likely that at the human level, people in the most vulnerable countries will suffer more, inequitable vaccine allocation definitely will disrupt the supply chain for all, including—perhaps even especially—the wealthiest nations that have come to depend on cheap sources of labor. If supplying nations have lots of people being sick, or they have to shut down, [there are] no workers to process or transport the raw materials, or to manufacture and deliver the products. People in these countries also can’t travel or spend money, which can greatly affect international hotel chains, airlines, and hospitality industries as well.
This would apply within a high-income country too. If undocumented workers, farm workers, homeless people, and others in low-wage jobs can’t get vaccinated, they can’t work to keep the supply chain going. So restaurants, entertainment industries, etc. would suffer. If they can’t pay the rent or mortgage or have extra money, that also affects the rest of the economy.
This story is part of the Pandemic Technology Project, supported by the Rockefeller Foundation.
Tech is having a reckoning. Tech investors? Not so much.
These alternative platforms have also been indirect beneficiaries of the insurrection at the Capitol, seeing spikes in users after the mainstream services deplatformed President Trump, his surrogates, and accounts promoting the QAnon conspiracy.
In a few cases, public pressure has forced action. DLive, a cryptocurrency-based video streaming site, which was acquired by BitTorrent’s Tron Foundation in October 2020, suspended or permanently banned accounts, channels, and individual broadcasts after the Southern Poverty Law Center identified those that livestreamed the attack from inside the Capitol building.
Neither Tron Foundation, which owns DLive, nor Medici Ventures, the Overstock subsidiary that invested in Minds, responded to requests for comment.
EvoNexus, a Southern California-based tech incubator that helped fund the self-described “non-biased” social network CloutHub, forwarded our request for comment to CloutHub’s PR team, which denied that the platform was used in the planning of the insurrection. A group started on the platform and promoted by founder Jeff Brain, the team said, was merely for organizing ride sharing to the Trump rally on January 6. The group “was for peaceful activities only and asked that members report anyone talking about violence.”
But there’s a fine line between speech and action, says Margaret O’Mara, a historian at the University of Washington who studies the intersection between technology and politics. When, as a platform “you decide you’re not going to take sides, and you’re going to be an unfettered platform for free speech,” and then people “saying horrible things” is “resulting in action,” then platforms need to reckon with the fact that “we are a catalyst of this, we are becoming an organizing platform for this.”
“Maybe you wouldn’t get dealflow”
For the most part, says O’Donnell, investors are worried that expressing an opinion about those companies might limit their ability to make deals, and therefore make money.
Even venture capital firms “have to depend on pools of money elsewhere in the ecosystem,” he says. “The worry was that maybe you wouldn’t get dealflow,” or that you’d be labeled as “difficult to work with or, you know, picking off somebody who might do the next round of your company.”
Still, O’Donnell says he does not believe that investors should completely avoid “alt tech.” Tech investors like disruption, he explains, and they see in alt tech the potential to “break up the monoliths.”
“Could that same technology be used for coordinating among people in doing bad stuff? Yeah, it’s possible, just in the same way that people use phones to commit crimes,” he says, adding that this issue can be resolved by having the right rules and procedures in place.
“There’s some alternative tech whose DNA is about decentralization, and there’s some alt-tech whose DNA is about a political perspective,” he says. He does not consider Gab, for example, to be a decentralized platform, but rather “a central hosting hub for people who otherwise violate the terms of service of other platforms.”
“The internet is decentralized, right? But we have means for creating databases of bad actors, when it comes to spam, when it comes to denial of service attacks,” he says, suggesting the same could be true of bad actors on alt tech platforms.
But overlooking the more dangerous sides of these communications platforms, and how their design often facilitates dangerous behavior is a mistake, says O’Mara. “It’s a kind of escapism that runs through the response that powerful people in tech … have, which is just, if we have alternative technologies, if we just have a decentralized internet, if we just have Bitcoin” … then everything will be better.
She calls this position “idealistic” but “very unrealistic,” and a reflection of “a deeply rooted piece of Silicon Valley culture. It goes all the way back to, ‘We don’t like the world as it is, so we’re gonna build this alternative platform on which to revise social relationships.’”
The problem, O’Mara adds, is that these solutions are “very technology driven” and “chiefly promulgated by pretty privileged people that … have a hard time … [imagining] a lot of the social politics. So there’s not a real reckoning with structural inequality or other systems that need to be changed.”
How to have “a transformational effect”
Some believe that tech investors could shift what kind of companies get built, if they chose to.
“If venture capitalists committed to not investing in predatory business models that incite violence, that would have a transformational effect,” says McNamee.
At an individual level, investors could do better diligence before putting money in, says O’Donnell—avoiding companies without content policies, for example, or requiring that companies create them before a VC signs on.
Once invested, O’Donnell adds, investors can also sell their shares, even at a loss, if they truly want to take a stand. But he recognizes what a tall order this would be—after all, a high-growth startup will most likely just find a different source of money to step into the space a principled investor has vacated. “It’s going to be pissing in the wind,” he says, “because that guy over there is going to be in.”
In other words, a real reckoning among VCs would require a reorientation of how Silicon Valley thinks, and right now it is still focused on “one, and only one, metric that matters, and that’s financial return,” says Freada Kapor Klein.
If funders changed their investment strategies—to put in moral clauses against companies that profit from extremism, for example, as O’Donnell suggested—the impact that this would have on what startup founders chase would be enormous, says O’Mara. “People follow the money,” she says, but “it’s not just money, it’s mentorship, it’s how you build a company, it’s this whole set of principles about what success looks like.”
“It would have been great if VCs who pride themselves on risk-taking and innovation and disruption … led the way,” concludes Kapor Klein. “But this tsunami is coming. And they will have to change.”
Correction: Brooklyn Bridge Ventures is an investor in Clubhouse, a product management software company, not Clubhouse, the social network as originally stated.
The Biden administration’s AI plans: what we might expect
I suspect we will see OSTP emphasize tech accountability under her leadership, which will be especially pertinent to hot-button AI issues like facial recognition, algorithmic bias, data privacy, corporate influence on research, and the myriad other issues that I write about in The Algorithm.
Finally, Biden’s new secretary of state made clear that technology will still be an important geopolitical force. During his Senate confirmation hearing, Antony Blinken remarked that there is “an increasing divide between techno democracies and techno autocracies. Whether techno democracies or techno autocracies are the ones who get to define how tech is used…will go a long way toward shaping the next decades.” As Politico pointed out, this is most clearly an allusion to China, and to the idea that the US is in a race with that country to develop emerging technologies like AI and 5G. OneZero’s Dave Gershgorn reported in 2019 that this had become a rallying cry at the Pentagon. Speaking at an AI conference in Washington, Trump’s secretary of defense, Mark Esper, framed the technological race “in dramatic terms,” wrote Gershgorn: “A future of global authoritarianism or global democracy.”
Blinken’s comments suggest to me that the Biden administration will likely continue this thread from the Trump administration. That means it may keep putting export controls on sensitive AI technologies and banning Chinese tech giants from doing business with American entities. It’s possible the administration may also invest more in building up the US’s high-tech manufacturing capabilities in an attempt to disentangle its AI chip supply chain from China.
Correction: Jack Clark is the former, not current, policy director at OpenAI.