Why the attack on the Capitol was inevitable

Maybe you saw this coming nearly a decade ago, when #YourSlipIsShowing laid bare how racist Twitter users were impersonating Black women on the internet. Maybe, for you, it was during Gamergate, the online abuse campaign targeting women in the industry. Or maybe it was the mass shooting in Christchurch, when a gunman steeped in the culture of 8chan livestreamed himself murdering dozens of people. 

Maybe it was when you, or your friend, or your community, became the target of an extremist online mob, and you saw online anger become real-world danger and harm.

Or maybe what happened on Wednesday, when a rabble of internet-fueled Trump supporters invaded the Capitol, came as a surprise.

For weeks they had been planning their action in plain sight on the internet—but they have been showing you who they are for years. The level of shock you feel right now about the power and danger of online extremism depends on whether you were paying attention. 

The consequences of inaction

The mob that tried to block Congress from confirming Joe Biden’s presidential victory showed how the stupidity and danger of the far-right internet could come into the real world again, but this time it struck at the center of the US government. Neo-Nazi streamers weren’t just inside the Capitol; they were putting on a show for audiences of tens of thousands of people who egged them on in the chats. The mob was having fun doing memes in the halls of American democracy as a woman—a Trump supporter whose social media history shows her devotion to QAnon—was killed trying to break into congressional offices.

The past year, especially since the pandemic began, has been one giant demonstration of the consequences of inaction: the consequences of ignoring the many, many people who have been begging social media companies to take seriously the meme-making extremists and conspiracy theorists that have thrived on their platforms.

Facebook and Twitter acted to slow the rise of QAnon over the summer, but only after the pro-Trump conspiracy theory was able to grow relatively unrestricted there for three years. Account bans and algorithm tweaks have long been too little, too late to deal with racists, extremists, and conspiracy theorists, and they have rarely addressed the fact that these powerful systems were working exactly as intended.

For a story in October, I spoke with a small handful of the people who could have told you this was coming. Researchers, technologists, and activists told me that major social media companies have, for the entirety of their history, chosen to do nothing, or to act only after their platforms caused abuse and harm.

Ariel Waldman tried to get Twitter to meaningfully address abuse there in 2008. Researchers like Shafiqah Hudson, I’Nasah Crockett, and Shireen Mitchell have spent years tracking exactly how harassment works and finds an audience on these platforms. Whitney Phillips talked about how she’s haunted by laughter—not just from other people, but also her own—back in the earliest days of her research into online culture and trolling, when overwhelmingly white researchers and personalities treated the extremists among them as edgy curiosities.

Ellen Pao, who briefly served as CEO of Reddit in 2014 and stepped down after introducing the platform’s first anti-harassment policy, was astonished that Reddit had only banned r/The_Donald in June 2020, after evidence had built for years to show that the popular pro-Trump message board served as an organizing space for extremists and a channel for mob abuse. Of course, by the time it was banned, many of its users had already migrated away from Reddit to TheDonald.win, an independent forum created by the same people who ran the previous version. Its pages were filled with dozens of calls for violence ahead of Wednesday’s rally-turned-attempted-coup.

Banning Trump doesn’t solve the issue

Facebook, Twitter, and YouTube didn’t create conspiracy thinking, or extremist ideologies, of course. Nor did they invent the idea of dangerous personality cults. But these platforms have—by design—handed those groups the mechanisms to reach much larger audiences much faster, and to recruit and radicalize new converts, even at the expense of the people and communities those ideologies target for abuse. And crucially, even when it was clear what was happening, they chose the minimal amount of change—or decided not to intervene at all. 

In the wake of the attempted coup at the Capitol building, people are again looking at the major social media companies to see how they respond. The focus is on Trump’s personal accounts, which he used to encourage supporters to descend on DC and then to praise them when they did. Will he be banned from Twitter? There are compelling arguments for why he should be.

But as heavy and consequential as that would be, it’s also, in other ways… not. Abusers, harassers, conspiracy theorists, and racists will still benefit from social media companies that act only when it’s too late, even without Trump retweeting them and egging them on.

Facebook has banned Trump indefinitely, and has also increased its moderation of groups, where a lot of conspiracy-fueled activity lives. These changes are good, but again, not new: people have told Facebook about this for years; Facebook employees have told Facebook about this for years. Groups were instrumental in organizing Stop the Steal protests in the days after the election, and before that in anti-mask protests, and before that in spreading fake news, and before that as a central space for anti-vaccine misinformation. None of this is new.

There are only so many ways to say that more people should have listened. If you’re paying attention now, maybe you’ll finally start hearing what they say. 

Why more countries need covid vaccines, not just the richest ones

There may also be another disparity. There are many people who, even if you offer them the vaccine, will not take it. And that’s partly because of the distrust. There is a much higher level of distrust among Latino and Black Americans, partly because of historical mistreatment. 

Q: How are you seeing mistrust affect vaccination disparities more globally?

A: When we think about mistrust on a global scale, that may be partly because of how the pharmaceutical industry prices things and how they have patents. Some countries may be thinking, “these companies from the US or Europe are really trying to sell us their expensive vaccines. But we can’t really afford them for our population in the first place because they are patented, and we are not allowed to just make a generic version of it.” They may be thinking “these companies are just trying to take advantage of us.” And there certainly have been examples of lower-income countries that have been exploited by the pharmaceutical industry. 

“Inequitable vaccine allocation definitely will disrupt the supply chain for all, including the wealthiest nations that have come to depend on cheap sources of labor.”

In Indonesia, for example, this happened with H5N1. Whenever there’s an outbreak, if you’re a WHO member, you send samples to a WHO lab and they try to find out about this particular virus or disease. Based on genetic material sent from Indonesia, scientists developed therapeutics for H5N1 and tried to sell them back to Indonesia. Then Indonesia thought, “OK, these were our samples. Should there not have been collaboration? You’re using them to sell drugs back to us.”

Q: Does the US have a moral obligation to send people to other countries to help with vaccinations?

A: One of the problems is that we’re not able to train enough people in the local places. For Covax or other kinds of international collaboration, it’s not about sending people so much as it’s about “how do we help them build up their own infrastructure?” Even financial resources for training courses or other kinds of ways to beef up their own human resources. Because you can imagine we’d go, and then we’d leave, and they’re not any better in terms of infrastructure.

Q: How would it affect higher-income countries if other, lower-income countries don’t receive their vaccines until later? Recent research says, for example, that if poor countries don’t get vaccines, it will disrupt the economy for everyone. 

A: While it’s still likely that at the human level, people in the most vulnerable countries will suffer more, inequitable vaccine allocation definitely will disrupt the supply chain for all, including—perhaps even especially—the wealthiest nations that have come to depend on cheap sources of labor. If supplying nations have lots of people being sick, or they have to shut down, [there are] no workers to process or transport the raw materials, or to manufacture and deliver the products. People in these countries also can’t travel or spend money, which can greatly affect international hotel chains, airlines, and hospitality industries as well.

This would apply within a high-income country too. If undocumented workers, farm workers, homeless people, and others in low-wage jobs can’t get vaccinated, they can’t work to keep the supply chain going. So restaurants, entertainment industries, etc. would suffer. If they can’t pay the rent or mortgage or have extra money, that also affects the rest of the economy.

This story is part of the Pandemic Technology Project, supported by the Rockefeller Foundation.

Tech is having a reckoning. Tech investors? Not so much.

They have also been indirect beneficiaries of the insurrection at the Capitol, with spikes in users as a result of the mainstream services’ deplatforming of President Trump, his surrogates, and accounts promoting the QAnon conspiracy.

In a few cases, public pressure has forced action. DLive, a cryptocurrency-based video streaming site, which was acquired by BitTorrent’s Tron Foundation in October 2020, suspended or permanently banned accounts, channels, and individual broadcasts after the Southern Poverty Law Center identified those that livestreamed the attack from inside the Capitol building.

Neither Tron Foundation, which owns DLive, nor Medici Ventures, the Overstock subsidiary that invested in Minds, responded to requests for comment. 

EvoNexus, a Southern California-based tech incubator that helped fund the self-described “non-biased” social network CloutHub, forwarded our request for comment to CloutHub’s PR team, which denied that the platform was used in the planning of the insurrection. It said that a group started on the platform and promoted by founder Jeff Brain was merely for organizing ride sharing to the Trump rally on January 6. The group, it said, “was for peaceful activities only and asked that members report anyone talking about violence.”

But there’s a fine line between speech and action, says Margaret O’Mara, a historian at the University of Washington who studies the intersection between technology and politics. When, as a platform “you decide you’re not going to take sides, and you’re going to be an unfettered platform for free speech,” and then people “saying horrible things” is “resulting in action,” then platforms need to reckon with the fact that “we are a catalyst of this, we are becoming an organizing platform for this.” 

“Maybe you wouldn’t get dealflow”

For the most part, says O’Donnell, investors are worried that expressing an opinion about those companies might limit their ability to make deals, and therefore make money.

Even venture capital firms “have to depend on pools of money elsewhere in the ecosystem,” he says. “The worry was that maybe you wouldn’t get dealflow,” or that you’d be labeled as “difficult to work with or, you know, picking off somebody who might do the next round of your company.” 

Despite this, however, O’Donnell says he does not believe that investors should completely avoid “alt tech.” Tech investors like disruption, he explains, and they see in alt tech the potential to “break up the monoliths.” 

“Could that same technology be used for coordinating among people in doing bad stuff? Yeah, it’s possible, just in the same way that people use phones to commit crimes,” he says, adding that this issue can be resolved by having the right rules and procedures in place. 

“There’s some alternative tech whose DNA is about decentralization, and there’s some alt-tech whose DNA is about a political perspective,” he says. He does not consider Gab, for example, to be a decentralized platform, but rather “a central hosting hub for people who otherwise violate the terms of service of other platforms.”

“It’s going to be pissing in the wind… because that guy over there is going to be in.”

Charlie O’Donnell

“The internet is decentralized, right? But we have means for creating databases of bad actors, when it comes to spam, when it comes to denial of service attacks,” he says, suggesting the same could be true of bad actors on alt tech platforms. 

But overlooking the more dangerous sides of these communications platforms, and how their design often facilitates dangerous behavior, is a mistake, says O’Mara. “It’s a kind of escapism that runs through the response that powerful people in tech … have, which is just, if we have alternative technologies, if we just have a decentralized internet, if we just have Bitcoin” … then everything will be better.

She calls this position “idealistic” but “very unrealistic,” and a reflection of “a deeply rooted piece of Silicon Valley culture. It goes all the way back to, ‘We don’t like the world as it is, so we’re gonna build this alternative platform on which to revise social relationships.’” 

The problem, O’Mara adds, is that these solutions are “very technology driven” and “chiefly promulgated by pretty privileged people that … have a hard time … [imagining] a lot of the social politics. So there’s not a real reckoning with structural inequality or other systems that need to be changed.”

How to have “a transformational effect”

Some believe that tech investors could shift what kind of companies get built, if they chose to. 

“If venture capitalists committed to not investing in predatory business models that incite violence, that would have a transformational effect,” says McNamee.

At an individual level, they could ask better questions even before investing, says O’Donnell: avoiding companies without content policies, for example, or requesting that companies create them before a VC signs on.

Once they have invested, O’Donnell adds, investors can also sell their shares, even at a loss, if they truly want to take a stand. But he recognizes the tall order that this would represent—after all, it’s highly likely that a high-growth startup will simply find a different source of money to step into the space that a principled investor just vacated. “It’s going to be pissing in the wind,” he says, “because that guy over there is going to be in.”

In other words, a real reckoning among VCs would require a reorientation of how Silicon Valley thinks, and right now it is still focused on “one, and only one, metric that matters, and that’s financial return,” says Freada Kapor Klein.

If funders changed their investment strategies—to put in moral clauses against companies that profit from extremism, for example, as O’Donnell suggested—the impact that this would have on what startup founders chase would be enormous, says O’Mara. “People follow the money,” she says, but “it’s not just money, it’s mentorship, it’s how you build a company, it’s this whole set of principles about what success looks like.” 

“It would have been great if VCs who pride themselves on risk-taking and innovation and disruption … led the way,” concludes Kapor Klein. “But this tsunami is coming. And they will have to change.”

Correction: Brooklyn Bridge Ventures is an investor in Clubhouse, a product management software company, not Clubhouse, the social network as originally stated.

The Biden administration’s AI plans: what we might expect

I suspect we will see OSTP emphasize tech accountability under her leadership, which will be especially pertinent to hot-button AI issues like facial recognition, algorithmic bias, data privacy, corporate influence on research, and the myriad other issues that I write about in The Algorithm.

Finally, Biden’s new secretary of state made clear that technology will still be an important geopolitical force. During his Senate confirmation hearing, Antony Blinken remarked that there is “an increasing divide between techno democracies and techno autocracies. Whether techno democracies or techno autocracies are the ones who get to define how tech is used…will go a long way toward shaping the next decades.” As Politico pointed out, this is most clearly an allusion to China, and to the idea that the US is in a race with that country to develop emerging technologies like AI and 5G. OneZero’s Dave Gershgorn reported in 2019 that this had become a rallying cry at the Pentagon. Speaking at an AI conference in Washington, Trump’s secretary of defense, Mark Esper, framed the technological race “in dramatic terms,” wrote Gershgorn: “A future of global authoritarianism or global democracy.”

Blinken’s comments suggest to me that the Biden administration will likely continue this thread from the Trump administration. That means it may continue putting export controls on sensitive AI technologies and banning Chinese tech giants from doing business with American entities. It’s possible the administration may also invest more in building up the US’s high-tech manufacturing capabilities in an attempt to disentangle its AI chip supply chain from China.

Correction: Jack Clark is the former, not current, policy director at OpenAI.
