That means that losing access to the mainstream platforms will reduce his audience and dilute the reach of his statements, as the deplatforming of far-right figures like Alex Jones and Milo Yiannopoulos shows. Yiannopoulos, who was banned in 2016 for his repeated racist abuse of actress Leslie Jones, complained about the effect that deplatforming had on his income.
“Part of it is because people just don’t remember to go to other websites,” says Joan Donovan, the research director at Harvard’s Shorenstein Center on Media, Politics, and Public Policy. Donovan, a regular MIT Technology Review contributor, points out that the mainstream platforms have built in “bells and whistles” designed to minimize friction and make engaging with content as easy as possible. If Trump were limited to a niche service with limited design and features, such as Parler, she says, it would create an additional barrier to sharing his content.
Communicating through proxies—with smaller followings
Even during @realdonaldtrump’s day-long absence from Twitter, Trump was not entirely silent on the platform. On Thursday, while the president was still unable to post from his personal account, White House social media director Dan Scavino tweeted a statement from the president that conceded the election—but did not concede his claim that the election was stolen. It was picked up by the media, but with 40,000 retweets and 100,000 likes, it fell far short of the hundreds of thousands that typically engage with each of Trump’s own missives.
As a result, it is “casual supporters” that Trump is most likely to lose if he is permanently banned, says Brooking; they “will hear from him less frequently,” which could mean that “in time, they may become less wedded to the conspiracy theories and falsehoods that he has made a habit of spreading.”
Of course, it depends on whom he’s speaking through. Much of his disinformation around voter fraud, for example, came from a wider “network of content creation,” says Donovan; that is, individuals close to the president who each have large followings themselves, including Rudy Giuliani, Sidney Powell, and Lin Wood, among others. “These are the accounts that I’m most worried about, because these are the people that are incentivized … because they’re making money off of this,” she says.
A Trump “digital media empire” could also be blocked
One route around losing his perch on major social media sites could be for Trump to spin up his own systems to talk directly to supporters. The app for his failed reelection campaign, for example, had its own news and notification system, which often shared questionable or disproven stories that emphasized the president’s talking points.
Recovering from the SolarWinds hack could take 18 months
SolarWinds Orion, the network management product that was targeted, is used in tens of thousands of corporations and government agencies. Over 17,000 organizations downloaded the infected back door. The hackers were extraordinarily stealthy and specific in their targeting, which is why it took so long to catch them—and why it’s taking so long to understand their full impact.
The difficulty of uncovering the extent of the damage was summarized by Brad Smith, the president of Microsoft, in a congressional hearing last week.
“Who knows the entirety of what happened here?” he said. “Right now, the attacker is the only one who knows the entirety of what they did.”
Kevin Mandia, CEO of the security company FireEye, which raised the first alerts about the attack, told Congress that the hackers prioritized stealth above all else.
“Disruption would have been easier than what they did,” he said. “They had focused, disciplined data theft. It’s easier to just delete everything in blunt-force trauma and see what happens. They actually did more work than what it would have taken to go destructive.”
“This has a silver lining”
CISA first heard about a problem when FireEye discovered that it had been hacked and notified the agency. The company regularly works closely with the US government, and although it wasn’t legally obligated to tell anyone about the hack, it quickly shared the news that its own sensitive corporate network had been compromised.
It was Microsoft that told the US government federal networks had been compromised. The company shared that information with Wales on December 11, he said in an interview. Microsoft observed the hackers breaking into the Microsoft 365 cloud that is used by many government agencies. A day later, FireEye informed CISA of the back door in SolarWinds, a little-known but extremely widespread and powerful tool.
This signaled that the scale of the hack could be enormous. CISA’s investigators ended up working straight through the holidays to help agencies hunt for the hackers in their networks.
These efforts were made even more complicated because Wales had only just taken over at the agency: days earlier, former director Chris Krebs had been fired by Donald Trump for repeatedly debunking White House disinformation about a stolen election.
How Apple’s locked-down security gives extra protection to the best hackers
“It’s a double-edged sword,” says Bill Marczak, a senior researcher at the cybersecurity watchdog Citizen Lab. “You’re going to keep out a lot of the riffraff by making it harder to break iPhones. But the 1% of top hackers are going to find a way in and, once they’re inside, the impenetrable fortress of the iPhone protects them.”
Marczak has spent the last eight years hunting those top-tier hackers. His research includes the groundbreaking 2016 “Million Dollar Dissident” report that introduced the world to the Israeli hacking company NSO Group. And in December, he was the lead author of a report titled “The Great iPwn,” detailing how the same hackers allegedly targeted dozens of Al Jazeera journalists.
He argues that while the iPhone’s security is getting tighter as Apple invests millions to raise the wall, the best hackers have their own millions to buy or develop zero-click exploits that let them take over iPhones invisibly. These allow attackers to burrow into the restricted parts of the phone without ever giving the target any indication of having been compromised. And once they’re that deep inside, the security becomes a barrier that keeps investigators from spotting or understanding nefarious behavior—to the point where Marczak suspects they’re missing all but a small fraction of attacks because they cannot see behind the curtain.
This means that even to know you’re under attack, you may have to rely on luck or vague suspicion rather than clear evidence. The Al Jazeera journalist Tamer Almisshal contacted Citizen Lab after he received death threats about his work in January 2020, but Marczak’s team initially found no direct evidence of hacking on his iPhone. They persevered by looking indirectly at the phone’s internet traffic to see who it was whispering to, until finally, in July last year, researchers saw the phone pinging servers belonging to NSO. It was strong evidence pointing toward a hack using the Israeli company’s software, but it didn’t expose the hack itself.
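The kind of indirect hunting described above—watching which servers a phone “whispers to” and comparing them against known attacker infrastructure—can be sketched in a few lines. This is a simplified illustration, not Citizen Lab’s actual tooling, and the indicator domains below are hypothetical placeholders, not real NSO Group servers:

```python
# Sketch of indicator matching on observed network traffic: compare
# each hostname a device contacts against a list of known-bad
# infrastructure domains. All domains here are made up for the example.
KNOWN_BAD_DOMAINS = {"update-cdn.example.net", "push-sync.example.org"}

def matches_indicator(hostname: str, indicators: set[str]) -> bool:
    """True if hostname is an indicator domain or a subdomain of one."""
    return any(hostname == d or hostname.endswith("." + d)
               for d in indicators)

# Hostnames extracted from a (hypothetical) traffic capture.
observed = ["api.apple.com", "a1.push-sync.example.org", "cdn.example.com"]
hits = [h for h in observed if matches_indicator(h, KNOWN_BAD_DOMAINS)]
print(hits)  # → ['a1.push-sync.example.org']
```

As the article notes, a match like this is strong circumstantial evidence of compromise, but it does not expose the exploit itself—the phone’s contents remain out of reach.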
Sometimes the locked-down system can backfire even more directly. When Apple released a new version of iOS last summer in the middle of Marczak’s investigation, the phone’s new security features killed an unauthorized “jailbreak” tool Citizen Lab used to open up the iPhone. The update locked him out of the private areas of the phone, including a folder for new updates—which turned out to be exactly where hackers were hiding.
Faced with these blocks, “we just kind of threw our hands up,” says Marczak. “We can’t get anything from this—there’s just no way.”
Beyond the phone
Ryan Storz is a security engineer at the firm Trail of Bits. He leads development of iVerify, a rare Apple-approved security app that does its best to peer inside iPhones while still playing by the rules set in Cupertino. iVerify looks for security anomalies on the iPhone, such as unexplained file modifications—the sort of indirect clues that can point to a deeper problem. Installing the app is a little like setting up trip wires in the castle that is the iPhone: if something doesn’t look the way you expect it to, you know a problem exists.
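The trip-wire idea—flagging unexplained file modifications against a known-good baseline—can be sketched with standard file hashing. This is a generic illustration of the technique, not iVerify’s actual implementation:

```python
# Sketch of a file-integrity "trip wire": record a baseline of file
# hashes, then report anything that is new, missing, or modified.
import hashlib
import os

def snapshot(root: str) -> dict[str, str]:
    """Map each file under root to its SHA-256 digest."""
    digests = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digests[path] = hashlib.sha256(f.read()).hexdigest()
    return digests

def unexplained_changes(baseline: dict[str, str],
                        current: dict[str, str]) -> list[str]:
    """Paths that appeared, disappeared, or changed since the baseline."""
    added_or_removed = set(baseline) ^ set(current)
    modified = {p for p in baseline.keys() & current.keys()
                if baseline[p] != current[p]}
    return sorted(added_or_removed | modified)
```

A real security app layers context on top of this—which files are expected to change, and which changes suggest a deeper problem—but the core signal is the same: something on disk no longer looks the way it should.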
But like the systems used by Marczak and others, the app can’t directly observe unknown malware that breaks the rules, and it is blocked from reading through the iPhone’s memory in the same way that security apps on other devices do. The trip wire is useful, but it isn’t the same as a guard who can walk through every room to look for invaders.
Despite these difficulties, Storz says, modern computers are converging on the lockdown philosophy—and he thinks the trade-off is worth it. “As we lock these things down, you reduce the damage of malware and spying,” he says.
This approach is spreading far beyond the iPhone. In a recent briefing with journalists, an Apple spokesperson described how the company’s Mac computers are increasingly adopting the iPhone’s security philosophy: its newest laptops and desktops run on custom-built M1 chips that make them more powerful and secure, in part by increasingly locking down the computer in the same ways as mobile devices.
“iOS is incredibly secure. Apple saw the benefits and has been moving them over to the Mac for a long time, and the M1 chip is a huge step in that direction,” says security researcher Patrick Wardle.
An AI is training counselors to deal with teens in crisis
The chatbot uses GPT-2 for its baseline conversational abilities. That model is trained on 45 million pages from the web, which teaches it the basic structure and grammar of the English language. The Trevor Project then trained it further on all the transcripts of previous Riley role-play conversations, which gave the bot the material it needed to mimic the persona.
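The two-stage process described—a general language model, then further training on domain transcripts—is standard causal language-model fine-tuning. The sketch below illustrates the training loop with the Hugging Face Transformers library; to stay self-contained it uses a tiny randomly initialized GPT-2 configuration and random stand-in token data, whereas a real pipeline (and presumably the Trevor Project’s) would start from the pretrained `gpt2` checkpoint and tokenized transcripts:

```python
# Illustrative fine-tuning loop for a GPT-2-style causal language model.
# The tiny config and random "transcript" batch are placeholders so the
# example runs quickly; in practice you would load pretrained weights
# with GPT2LMHeadModel.from_pretrained("gpt2") and real tokenized text.
import torch
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(vocab_size=256, n_positions=64, n_embd=64,
                    n_layer=2, n_head=2)
model = GPT2LMHeadModel(config)

# Stand-in for a batch of tokenized role-play transcripts.
batch = torch.randint(0, 256, (4, 32))

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)
model.train()
losses = []
for step in range(30):
    out = model(input_ids=batch, labels=batch)  # shifted next-token loss
    losses.append(out.loss.item())
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Because every transcript follows the same storyline, fine-tuning like this pushes the model toward a consistent persona without any explicit database of facts about the character.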
Throughout the development process, the team was surprised by how well the chatbot performed. There is no database storing details of Riley’s bio, yet the chatbot stayed consistent because every transcript reflected the same storyline.
But there are also trade-offs to using AI, especially in sensitive contexts with vulnerable communities. GPT-2, and other natural-language algorithms like it, are known to embed deeply racist, sexist, and homophobic ideas. More than one chatbot has been led disastrously astray this way, the most recent being a South Korean chatbot called Lee Luda that had the persona of a 20-year-old university student. After quickly gaining popularity and interacting with more and more users, it began using slurs to describe the queer and disabled communities.
The Trevor Project is aware of this and designed ways to limit the potential for trouble. While Lee Luda was meant to converse with users about anything, Riley is very narrowly focused. Volunteers won’t deviate too far from the conversations it has been trained on, which minimizes the chances of unpredictable behavior.
This also makes it easier to comprehensively test the chatbot, which the Trevor Project says it is doing. “These use cases that are highly specialized and well-defined, and designed inclusively, don’t pose a very high risk,” says Nenad Tomasev, a researcher at DeepMind.
Human to human
This isn’t the first time the mental health field has tried to tap into AI’s potential to provide inclusive, ethical assistance without hurting the people it’s designed to help. Researchers have developed promising ways of detecting depression from a combination of visual and auditory signals. Therapy “bots,” while not equivalent to a human professional, are being pitched as alternatives for those who can’t access a therapist or are uncomfortable confiding in a person.
Each of these developments, and others like them, requires thinking about how much agency AI tools should have when it comes to treating vulnerable people. And the consensus seems to be that at this point the technology isn’t really suited to replacing human help.
Still, Joiner, the psychology professor, says this could change over time. While replacing human counselors with AI copies is currently a bad idea, “that doesn’t mean that it’s a constraint that’s permanent,” he says. People already “have artificial friendships and relationships” with AI services. As long as people aren’t being tricked into thinking they are having a discussion with a human when they are talking to an AI, he says, it could be a possibility down the line.
In the meantime, Riley will never face the youths who actually text in to the Trevor Project: it will only ever serve as a training tool for volunteers. “The human-to-human connection between our counselors and the people who reach out to us is essential to everything that we do,” says Kendra Gaunt, the group’s data and AI product lead. “I think that makes us really unique, and something that I don’t think any of us want to replace or change.”