Staying safer while recording police activity requires different tactics depending on the situation. Bystanders witnessing police violence in a public space should keep a distance, Kelley-Chung advises—that way you can’t be accused of being a participant. If you get pulled over? Get a passenger to start filming right away, before the officer approaches your window (reaching into your pocket for your phone can also be extremely dangerous, particularly for people of color). If it’s legal in your area, a dash cam might be an alternative, Wandt suggests.
As much as a cell-phone camera offers protection, Wandt says, it’s also important to keep in mind that “once somebody takes out a camera and starts filming an arrest, it absolutely changes the nature of the situation for everybody, from the victim to the suspect to the police officer.”
“There’s the law, there’s the Constitution, and then there’s what you do when you’re face to face with the police,” says Sykes, the ACLU attorney. Figuring out exactly how much to push back against a police officer who is giving an unlawful order is “tough,” he says, especially in certain circumstances—for example, at a protest.
“There is a special flavor of risk when you’re protesting the police and the police are armed and standing feet away from you,” Sykes says.
On-the-ground experience is really the only way to read whether a situation at a protest is safe. But one thing Kelley-Chung has observed is that the presence of a camera filming an officer can protect others from misconduct.
“When you see people in a verbal dispute with police, get as close as possible,” he says. “That camera can be more protection than a tactical vest.”
In any situation, everyone we spoke to had the same caveats: Do not interfere in police operations. Comply when police tell you that you need to move, but you do not have to stop filming from a new location, even if they claim you must, as long as you are recording an officer in a public space carrying out their duties.
Cop watchers generally advise others to collect identifying information on police at the scene, and to note the time and location. You could ask for a badge number; Parriott says most officers actually just carry business cards.
A mine of misinformation
No single video is going to change how police act, and experts argue that even large numbers of videos cannot change the culture of many police departments. On the contrary, police have found ways to use video, especially body camera footage, to reinforce and control their own narrative in cases of possible violence or misconduct.
People like to think that video is simply a neutral tool for capturing information, says Jennifer Grygiel, an assistant professor of communications at Syracuse University—but it’s not, and how it’s released, and in what context, needs additional vetting.
“They get to set the narrative when it’s released, which controls the initial public sentiment around it and opinion. They also push it out on their social media, and their accounts are just like everybody else’s in that they grow their audience. So then they get people following them there because they’re the first to publish information,” Grygiel says. Grygiel’s own research examines how police departments use social media to bypass fact-checking by journalists; it began after they noticed police pushing out mugshots on local Facebook pages. “People were going in there, like an old public square, and harassing people who had been arrested,” Grygiel says.
As police become better at producing their own media, finding an audience outside of journalism, and making the most of accountability measures like body cameras, Grygiel argues, independent documentation of police officers working in public can serve as a counter to that messaging. Sometimes, as was the case with the Floyd murder, that documentation happens spontaneously, and often amid great distress, when clear instances of police violence or misconduct are unfolding in real time.
But the capacity for police and police-affiliated organizations to spread misinformation was obvious during the protests in the summer of 2020, when police departments repeatedly promoted inaccurate information. Some of that misinformation went viral, aided by sympathetic media coverage and the right-wing internet, hell-bent on reinforcing the belief that anti-racism protests are merely a conduit for a violent war on cops.
Police unions promoted an alarming claim that Shake Shack employees had “intentionally poisoned” a group of police officers in Manhattan. The story had fallen apart by the next morning: NYPD investigators said the foul-tasting substance in the three officers’ milkshakes wasn’t “bleach,” as the unions speculated, and it wasn’t added to the drinks on purpose. Although the Police Benevolent Association and the Detectives’ Endowment Association both eventually deleted their tweets making the accusation, the posts had drawn tens of thousands of retweets and triggered a wave of credulous coverage in the conservative and mainstream press. Media write-ups about the tweets got tens of thousands of shares on Facebook and continued to circulate even after the story was debunked.
And this was just one example. Last summer, NYPD Commissioner Dermot Shea reposted a video of police removing bins of bricks from a South Brooklyn sidewalk, claiming they were the work of “organized looters” offering protesters materials to use for violence, despite little evidence that this was actually true. The NYPD also circulated an alert to officers with images of coffee cups filled with concrete, which closely resemble concrete samples used on construction sites. In Columbus, Ohio, the police tweeted out a photo of a colorful bus that they said was supplying dangerous equipment to “rioters,” fueling already rampant national rumors of “antifa buses” descending on cities. In fact, the bus belonged to a group of circus performers, who said the equipment police cited as riot supplies included juggling clubs and kitchen utensils.
In short, police still lie despite being watched more closely than ever. There are hundreds of videos of police misconduct at the summer protests alone, some from the body cams introduced in reforms meant to hold them more accountable. But Kelley-Chung thinks there’s only so much difference any one video can make.
“I’ve seen people filming officers with their cameras out in the moment and then get tackled by police,” he says. “They know they’re on camera … and yet they still continue to abuse.”
And even after he reached his settlement with the DC police, there’s an aspect of that day he can’t stop thinking about. Kelley-Chung is Black, and his filming partner, Andrew Jasiura, is white. They were both dressed in the same T-shirt, carrying the same sort of camera equipment. Officers saw Jasiura too: “They pulled him out so they could talk to him,” says Kelley-Chung.
That’s when Jasiura told police that his partner was a journalist too. They continued to arrest him anyway.
The pandemic slashed the West Coast’s emissions. Wildfires have already reversed it.
That’s far above normal levels for this part of the year and comes on top of the surge of emissions from the massive fires across the American West in 2020. California fires alone produced more than 100 million tons of carbon dioxide last year, which was already enough to more than cancel out the broader region’s annual emissions declines.
“The steady but slow reductions in [greenhouse gases] pale in comparison to those from wildfire,” says Oriana Chegwidden, a climate scientist at CarbonPlan.
Massive wildfires burning across millions of acres in Siberia are also clogging the skies across eastern Russia and releasing tens of millions of tons of emissions, Copernicus reported earlier this month.
Fires and forest emissions are only expected to increase across many regions of the world as climate change accelerates in the coming decades, creating the hot and often dry conditions that turn trees and plants into tinder.
Fire risk—defined as the chance that an area will experience a moderate- to high-severity fire in any given year—could quadruple across the US by 2090, even under scenarios where emissions decline significantly in the coming decades, according to a recent study by researchers at the University of Utah and CarbonPlan. With unchecked emissions, US fire risk could be 14 times higher near the end of the century.
Emissions from fires are “already bad and only going to get worse,” says Chegwidden, one of the study’s lead authors.
Over longer periods, the emissions and climate impacts of increasing wildfires will depend on how rapidly forests grow back and draw carbon back down—or whether they do at all. That, in turn, depends on the dominant trees, the severity of the fires, and how much local climate conditions have changed since that forest took root.
While working toward her doctorate in the early 2010s, Camille Stevens-Rumann spent summer and spring months trekking through alpine forests in Idaho’s Frank Church–River of No Return Wilderness, studying the aftermath of fires.
She noted where and when conifer forests began to return, where they didn’t, and where opportunistic invasive species like cheatgrass took over the landscape.
In a 2018 study in Ecology Letters, she and her coauthors concluded that forests that burned across the Rocky Mountains have had far more trouble growing back in this century, as the region has grown hotter and drier, than they did at the end of the last one. Dry conifer forests that had already teetered on the edge of survivable conditions were far more likely to simply convert to grass and shrublands, which generally absorb and store much less carbon.
This can be healthy up to a point, creating fire breaks that reduce the damage of future fires, says Stevens-Rumann, an assistant professor of forest and rangeland stewardship at Colorado State University. It can also help to make up a bit for the US’s history of aggressively putting out fires, which has allowed fuel to build up in many forests, also increasing the odds of major blazes when they do ignite.
But their findings are “very ominous” given the massive fires we’re already seeing and the projections for increasingly hot, dry conditions across the American West, she says.
Other studies have noted that these pressures could begin to fundamentally transform western US forests in the coming decades, damaging or destroying sources of biodiversity, water, wildlife habitat, and carbon storage.
Fires, droughts, insect infestations, and shifting climate conditions will convert major parts of California’s forests into shrublands, according to a modeling study published in AGU Advances last week. Tree losses could be particularly steep in the dense Douglas fir and coastal redwood forests along the Northern California coast and in the foothills of the Sierra Nevada range.
All told, the state will lose around 9% of the carbon stored in trees and plants aboveground by the end of this century under a scenario in which we stabilize emissions this century, and more than 16% in a future world where they continue to rise.
Among other impacts, that will clearly complicate the state’s reliance on its lands to capture and store carbon through its forestry offsets program and other climate efforts, the study notes. California is striving to become carbon neutral by 2045.
Meanwhile, medium- to high-emissions scenarios create “a real likelihood of Yellowstone’s forests being converted to non-forest vegetation during the mid-21st century,” because increasingly common and large fires would make it more and more difficult for trees to grow back, a 2011 study in Proceedings of the National Academy of Sciences concluded.
The global picture
The net effect of climate change on fires, and fires on climate change, is much more complicated globally.
Fires contribute directly to climate change by releasing emissions from trees as well as the rich carbon stored in soils and peatlands. They can also produce black carbon that may eventually settle on glaciers and ice sheets, where it absorbs heat. That accelerates the loss of ice and the rise of sea levels.
But fires can drive negative climate feedback as well. The smoke from Western wildfires that reached the East Coast in recent days, while terrible for human health, carries aerosols that reflect some sunlight back into space. Similarly, fires in boreal forests in Canada, Alaska, and Russia can open up space for snow that’s far more reflective than the forest it replaced, offsetting the heating effect of the emissions released.
Different parts of the globe are also pushing and pulling in different ways.
Climate change is making wildfires worse in most forested areas of the globe, says James Randerson, a professor of earth system science at the University of California, Irvine, and a coauthor of the AGU paper.
But the total area burned by fires worldwide is actually going down, primarily thanks to decreases across the savannas and grasslands of the tropics. Among other factors, sprawling farms and roads are fragmenting the landscape in developing parts of Africa, Asia, and South America, acting as firebreaks. Meanwhile, growing herds of livestock are gobbling up fuels.
Overall, global emissions from fires stand at about a fifth of the level from fossil fuels, and they’re not yet rising sharply. But total emissions from forests have clearly been climbing when you include fires, deforestation, and logging. They’ve grown from less than 5 billion tons in 2001 to more than 10 billion in 2019, according to a Nature Climate Change paper in January.
Less fuel to burn
As warming continues in the decades ahead, climate change itself will affect different areas in different ways. While many regions will become hotter, drier, and more susceptible to wildfires, some cooler parts of the globe will become more hospitable to forest growth, like the high reaches of tall mountains and parts of the Arctic tundra, Randerson says.
Global warming could also reach a point where it actually starts to reduce certain risks as well. If Yellowstone, California’s Sierra Nevada, and other areas lose big portions of their forests, as studies have suggested, fires in those areas could begin to tick back down toward the end of the century. That’s because there’ll simply be less, or less flammable, fuel to burn.
Worldwide fire levels in the future will ultimately depend both on the rate of climate change as well as human activity, which is the main source of ignitions, says Doug Morton, chief of the biospheric sciences laboratory at NASA’s Goddard Space Flight Center.
Meet the people who warn the world about new covid variants
In March 2020, when the WHO declared a pandemic, the public sequence database GISAID held 524 covid sequences. Over the next month, scientists uploaded 6,000 more. By the end of May, the total was over 35,000. (In contrast, global scientists added 40,000 flu sequences to GISAID in all of 2019.)
“Without a name, forget about it—we cannot understand what other people are saying,” says Anderson Brito, a postdoc in genomic epidemiology at the Yale School of Public Health, who contributes to the Pango effort.
As the number of covid sequences spiraled, researchers trying to study them were forced to create entirely new infrastructure and standards on the fly. A universal naming system has been one of the most important elements of this effort: without it, scientists would struggle to talk to each other about how the virus’s descendants are traveling and changing—either to flag up a question or, even more critically, to sound the alarm.
Where Pango came from
In April 2020, a handful of prominent virologists in the UK and Australia proposed a system of letters and numbers for naming lineages, or new branches, of the covid family. It had a logic, and a hierarchy, even though the names it generated—like B.1.1.7—were a bit of a mouthful.
One of the authors on the paper was Áine O’Toole, a PhD candidate at the University of Edinburgh. Soon she’d become the primary person actually doing that sorting and classifying, eventually combing through hundreds of thousands of sequences by hand.
She says: “Very early on, it was just who was available to curate the sequences. That ended up being my job for a good bit. I guess I never understood quite the scale we were going to get to.”
She quickly set about building software to assign new genomes to the right lineages. Not long after that, another researcher, postdoc Emily Scher, built a machine-learning algorithm to speed things up even more.
They named the software Pangolin, a tongue-in-cheek reference to a debate about the animal origin of covid. (The whole system is now simply known as Pango.)
The naming system, along with the software to implement it, quickly became a global essential. Although the WHO has recently started using Greek letters for variants that seem especially concerning, like delta, those nicknames are for the public and the media. Delta actually refers to a growing family of variants, which scientists call by their more precise Pango names: B.1.617.2, AY.1, AY.2, and AY.3.
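The hierarchy in Pango names is machine-readable: each dot adds a level of descent, so B.1.617.2 is a child of B.1.617. A minimal sketch of that ancestry check, ignoring Pango's alias system (under which short names like AY.1 stand in for deeper B.1.617.2 sublineages):

```python
def is_sublineage(child: str, parent: str) -> bool:
    """Return True if `child` descends from `parent` in the
    dot-separated Pango hierarchy. This toy version does not
    resolve aliases such as AY.* -> B.1.617.2.*."""
    child_parts = child.upper().split(".")
    parent_parts = parent.upper().split(".")
    # A descendant's name extends its ancestor's name by one
    # or more dot-separated components.
    return (len(child_parts) > len(parent_parts)
            and child_parts[:len(parent_parts)] == parent_parts)

print(is_sublineage("B.1.617.2", "B.1.617"))  # True
print(is_sublineage("B.1.1.7", "B.1.617"))    # False
```

The full Pango rules also cap name depth, which is why alias prefixes like AY exist in the first place.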
“When alpha emerged in the UK, Pango made it very easy for us to look for those mutations in our genomes to see if we had that lineage in our country too,” says Jolly. “Ever since then, Pango has been used as the baseline for reporting and surveillance of variants in India.”
Because Pango offers a rational, orderly approach to what would otherwise be chaos, it may forever change the way scientists name viral strains—allowing experts from all over the world to work together with a shared vocabulary. Brito says: “Most likely, this will be a format we’ll use for tracking any other new virus.”
Many of the foundational tools for tracking covid genomes have been developed and maintained by early-career scientists like O’Toole and Scher over the last year and a half. As the need for worldwide covid collaboration exploded, scientists rushed to support it with ad hoc infrastructure like Pango. Much of that work fell to tech-savvy young researchers in their 20s and 30s. They used informal networks and tools that were open source—meaning they were free to use, and anyone could volunteer to add tweaks and improvements.
“The people on the cutting edge of new technologies tend to be grad students and postdocs,” says Angie Hinrichs, a bioinformatician at UC Santa Cruz who joined the project earlier this year. For example, O’Toole and Scher work in the lab of Andrew Rambaut, a genomic epidemiologist who posted the first public covid sequences online after receiving them from Chinese scientists. “They just happened to be perfectly placed to provide these tools that became absolutely critical,” Hinrichs says.
It hasn’t been easy. For most of 2020, O’Toole took on the bulk of the responsibility for identifying and naming new lineages by herself. The university was shuttered, but she and another of Rambaut’s PhD students, Verity Hill, got permission to come into the office. Her commute, walking 40 minutes to school from the apartment where she lived alone, gave her some sense of normalcy.
Every few weeks, O’Toole would download the entire covid repository from the GISAID database, which had grown dramatically each time. Then she would hunt around for groups of genomes with mutations that looked similar, or things that looked odd and might have been mislabeled.
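The kind of grouping involved can be illustrated with a toy sketch. Here each sequence is reduced to a set of mutation labels (the mutation names below are hypothetical examples, and the real curation works on full genomes, not tidy sets):

```python
from collections import defaultdict

# Hypothetical toy data: each sequence ID maps to the set of
# mutations called against the reference genome.
sequences = {
    "seq1": {"D614G", "N501Y", "P681H"},
    "seq2": {"D614G", "N501Y", "P681H"},
    "seq3": {"D614G", "L452R", "P681R"},
    "seq4": {"D614G", "L452R", "P681R"},
}

# Group sequences that share the exact same mutation profile;
# recurring clusters are candidates for a new lineage name.
clusters = defaultdict(list)
for seq_id, muts in sequences.items():
    clusters[frozenset(muts)].append(seq_id)

for muts, ids in clusters.items():
    print(sorted(muts), "->", ids)
```

In practice the decision is fuzzier, since related genomes rarely share identical mutation sets, which is part of why the curation work described here was so labor-intensive.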
When she got particularly stuck, Hill, Rambaut, and other members of the lab would pitch in to discuss the designations. But the grunt work fell on her.
Deciding when descendants of the virus deserve a new family name can be as much art as science. It was a painstaking process, sifting through an unheard-of number of genomes and asking time and again: Is this a new variant of covid or not?
“It was pretty tedious,” she says. “But it was always really humbling. Imagine going through 20,000 sequences from 100 different places in the world. I saw sequences from places I’d never even heard of.”
As time went on, O’Toole struggled to keep up with the volume of new genomes to sort and name.
In June 2020, there were over 57,000 sequences stored in the GISAID database, and O’Toole had sorted them into 39 variants. By November 2020, a month after she was supposed to turn in her thesis, O’Toole took her last solo run through the data. It took her 10 days to go through all the sequences, which by then numbered 200,000. (Although covid has overshadowed her research on other viruses, she’s putting a chapter on Pango in her thesis.)
Fortunately, the Pango software is built to be collaborative, and others have stepped up. An online community—the one that Jolly turned to when she noticed the variant sweeping across India—sprouted and grew. This year, O’Toole’s work has been much more hands-off. New lineages are now designated mostly when epidemiologists around the world contact O’Toole and the rest of the team through Twitter, email, or GitHub—her preferred method.
“Now it’s more reactionary,” says O’Toole. “If a group of researchers somewhere in the world is working on some data and they believe they’ve identified a new lineage, they can put in a request.”
The deluge of data has continued. This past spring, the team held a “pangothon,” a sort of hackathon in which they sorted 800,000 sequences into around 1,200 lineages.
“We gave ourselves three solid days,” says O’Toole. “It took two weeks.”
Since then, the Pango team has recruited a few more volunteers, like UCSC researcher Hinrichs and Yale researcher Brito, who both got involved initially by adding their two cents on Twitter and the GitHub page. A postdoc at the University of Cambridge, Chris Ruis, has turned his attention to helping O’Toole clear out the backlog of GitHub requests.
O’Toole recently asked them to formally join the organization as part of the newly created Pango Network Lineage Designation Committee, which discusses and makes decisions about variant names. Another committee, which includes lab leader Rambaut, makes higher-level decisions.
“We’ve got a website, and an email that’s not just my email,” O’Toole says. “It’s become a lot more formalized, and I think that will really help it scale.”
A few cracks around the edges have started to show as the data has grown. As of today, there are nearly 2.5 million covid sequences in GISAID, which the Pango team has split into 1,300 branches. Each branch corresponds to a variant. Of those, eight are ones to watch, according to the WHO.
With so much to process, the software is starting to buckle. Things are getting mislabeled. Many strains look similar, because the virus evolves the most advantageous mutations over and over again.
As a stopgap measure, the team has built new software that uses a different sorting method and can catch things that Pango may miss.
Disability rights advocates are worried about discrimination in AI hiring tools
Making hiring technology accessible means ensuring both that a candidate can use the technology and that the skills it measures don’t unfairly exclude candidates with disabilities, says Alexandra Givens, the CEO of the Center for Democracy and Technology, an organization focused on civil rights in the digital age.
AI-powered hiring tools often fail to include people with disabilities when generating their training data, she says. Such people have long been excluded from the workforce, so algorithms modeled after a company’s previous hires won’t reflect their potential.
Even if the models could account for outliers, the way a disability presents itself varies widely from person to person. Two people with autism, for example, could have very different strengths and challenges.
“As we automate these systems, and employers push to what’s fastest and most efficient, they’re losing the chance for people to actually show their qualifications and their ability to do the job,” Givens says. “And that is a huge loss.”
A hands-off approach
Government regulators are finding it difficult to monitor AI hiring tools. In December 2020, 11 senators wrote a letter to the US Equal Employment Opportunity Commission expressing concerns about the use of hiring technologies after the covid-19 pandemic. The letter inquired about the agency’s authority to investigate whether these tools discriminate, particularly against those with disabilities.
The EEOC responded with a letter in January that was leaked to MIT Technology Review. In the letter, the commission indicated that it cannot investigate AI hiring tools without a specific claim of discrimination. The letter also outlined concerns about the industry’s hesitance to share data and said that variation between different companies’ software would prevent the EEOC from instituting any broad policies.
“I was surprised and disappointed when I saw the response,” says Roland Behm, a lawyer and advocate for people with behavioral health issues. “The whole tenor of that letter seemed to make the EEOC seem like more of a passive bystander rather than an enforcement agency.”
The agency typically starts an investigation once an individual files a claim of discrimination. With AI hiring technology, though, most candidates don’t know why they were rejected for the job. “I believe a reason that we haven’t seen more enforcement action or private litigation in this area is due to the fact that candidates don’t know that they’re being graded or assessed by a computer,” says Keith Sonderling, an EEOC commissioner.
Sonderling says he believes that artificial intelligence will improve the hiring process, and he hopes the agency will issue guidance for employers on how best to implement it. He says he welcomes oversight from Congress.