Staying safer while recording police activity requires different tactics depending on the situation. Bystanders witnessing police violence in a public space should keep a distance, Kelley-Chung advises—that way you can’t be accused of being a participant. If you get pulled over? Get a passenger to start filming right away, before the officer approaches your window (reaching into your pocket for your phone can also be extremely dangerous, particularly for people of color). If it’s legal in your area, a dash cam might be an alternative, Wandt suggests.
As much as a cell-phone camera offers protection, Wandt says, it’s also important to keep in mind that “once somebody takes out a camera and starts filming an arrest, it absolutely changes the nature of the situation for everybody, from the victim to the suspect to the police officer.”
“There’s the law, there’s the Constitution, and then there’s what you do when you’re face to face with the police,” says Sykes, the ACLU attorney. Figuring out exactly how much to push back against a police officer who is giving an unlawful order is “tough,” he says, especially in certain circumstances—for example, at a protest.
“There is a special flavor of risk when you’re protesting the police and the police are armed and standing feet away from you,” Sykes says.
On-the-ground experience is really the only way to read whether a situation at a protest is safe. But one thing Kelley-Chung has observed is that the presence of a camera filming an officer can protect others from misconduct.
“When you see people in a verbal dispute with police, get as close as possible,” he says. “That camera can be more protection than a tactical vest.”
In any situation, everyone we spoke to had the same caveats: Do not interfere in police operations. Comply when police tell you that you need to move, but you do not have to stop filming; as long as you are recording an officer carrying out their duties in a public space, you can keep filming from your new location, even if officers claim you must stop.
Cop watchers generally advise others to collect identifying information on police at the scene, and to note the time and location. You could ask for a badge number; Parriott says most officers actually just carry business cards.
A mine of misinformation
No single video is going to change how police act, and experts argue that even large numbers of videos cannot change the culture of many police departments. On the contrary, police have found ways to use video, especially body camera footage, to reinforce and control their own narrative in cases of possible violence or misconduct.
People like to think that video is simply a neutral tool for capturing information, says Jennifer Grygiel, an assistant professor of communications at Syracuse University—but it’s not, and how it’s released, and in what context, needs additional vetting.
“They get to set the narrative when it’s released, which controls the initial public sentiment around it and opinion. They also push it out on their social media, and their accounts are just like everybody else’s in that they grow their audience. So then they get people following them there because they’re the first to publish information,” Grygiel says. Their own research deals with how police departments use social media to bypass fact-checking by journalists: it started after they noticed how police were pushing out mugshots on local Facebook pages. “People were going in there, like an old public square, and harassing people who had been arrested,” Grygiel says.
As police become better at producing their own media, finding an audience outside of journalism, and making the most of accountability measures like body cameras, Grygiel argues, independent documentation of police officers working in public can serve as a counter to that messaging. Sometimes, as was the case with the Floyd murder, that documentation happens spontaneously, and often amid great distress, when clear instances of police violence or misconduct are unfolding in real time.
But the capacity for police and police-affiliated organizations to spread misinformation was obvious during the protests in the summer of 2020, when police departments repeatedly promoted inaccurate information. Some of that misinformation went viral, aided by sympathetic media coverage and the right-wing internet, hell-bent on reinforcing the belief that anti-racism protests are merely a conduit for a violent war on cops.
Police unions promoted an alarming claim that Shake Shack employees had “intentionally poisoned” a group of police officers in Manhattan. The story fell apart by the next morning: NYPD investigators said the foul-tasting substance in the three officers’ milkshakes wasn’t “bleach,” as the unions speculated, and it wasn’t added to the drinks on purpose. Although the Police Benevolent Association and the Detectives’ Endowment Association both eventually deleted their tweets making the accusation, the posts had drawn tens of thousands of retweets and triggered a wave of credulous coverage in the conservative and mainstream press. Media write-ups about the tweets got tens of thousands of shares on Facebook and continued to circulate even after the story was debunked.
And this was just one example. Last summer, NYPD Commissioner Dermot Shea reposted a video of police removing bins of bricks from a South Brooklyn sidewalk, claiming they were the work of “organized looters” offering protesters materials to use for violence, despite little evidence that this was actually true. The NYPD also circulated an alert to officers with images of coffee cups filled with concrete, which closely resemble concrete samples used on construction sites. In Columbus, Ohio, the police tweeted out a photo of a colorful bus that they said was supplying dangerous equipment to “rioters,” fueling already rampant national rumors of “antifa buses” descending on cities. In fact, the bus belonged to a group of circus performers, who said the equipment police cited as riot supplies included juggling clubs and kitchen utensils.
In short, police still lie despite being watched more closely than ever. There are hundreds of videos of police misconduct at the summer protests alone, some from the body cams introduced in reforms meant to hold them more accountable. But Kelley-Chung thinks there’s only so much difference any one video can make.
“I’ve seen people filming officers with their cameras out in the moment and then get tackled by police,” he says. “They know they’re on camera … and yet they still continue to abuse.”
And even after he reached his settlement with the DC police, there’s an aspect of that day he can’t stop thinking about. Kelley-Chung is Black, and his filming partner, Andrew Jasiura, is white. They were both dressed in the same T-shirt, carrying the same sort of camera equipment. Officers saw Jasiura too: “They pulled him out so they could talk to him,” says Kelley-Chung.
That’s when Jasiura told police that his partner was a journalist too. They continued to arrest him anyway.
A nonprofit promised to preserve wildlife. Then it made millions claiming it could cut down trees
Clegern said the program’s safeguards prevent the problems identified by CarbonPlan.
California’s offsets are considered additional carbon reductions because the floor serves “as a conservative backstop,” Clegern said. Without it, he explained, many landowners could have logged their forests to even lower levels.
Clegern added that the agency’s rules were adopted as a result of a lengthy process of debate and were upheld by the courts. A California Court of Appeal found the Air Resources Board had the discretion to use a standardized approach to evaluate whether projects were additional.
But the court did not make an independent determination about the effectiveness of the standard, and was “quite deferential to the agency’s judgment,” said Alice Kaswan, a law professor at the University of San Francisco School of Law, in an email.
California law requires the state’s cap-and-trade regulations to ensure that emissions reductions are “real, permanent, quantifiable, verifiable” and “in addition to any other greenhouse gas emission reduction that otherwise would occur.”
“If there’s new scientific information that suggests serious questions about the integrity of offsets, then, arguably, CARB has an ongoing duty to consider that information and revise their protocols accordingly,” Kaswan said. “The agency’s obligation is to implement the law, and the law requires additionality.”
On an early spring day, Lautzenheiser, the Audubon scientist, brought a reporter to a forest protected by the offset project. The trees here were mainly tall white pines mixed with hemlocks, maples and oaks. Lautzenheiser is usually the only human in this part of the woods, where he spends hours looking for rare plants or surveying stream salamanders.
The nonprofit’s planning documents acknowledge that the forests enrolled in California’s program were protected long before they began generating offsets: “A majority of the project area has been conserved and designated as high conservation value forest for many years with deliberate management focused on long-term natural resource conservation values.”
Meet Jennifer Daniel, the woman who decides what emoji we get to use
Emoji are now part of our language. If you’re like most people, you pepper your texts, Instagram posts, and TikTok videos with various little images to augment your words—maybe the syringe with a bit of blood dripping from it when you got your vaccination, the prayer (or high-fiving?) hands as a shortcut to “thank you,” a rosy-cheeked smiley face with jazz hands for a covid-safe hug from afar. Today’s emoji catalogue includes nearly 3,000 illustrations representing everything from emotions to food, natural phenomena, flags, and people at various stages of life.
Behind all those symbols is the Unicode Consortium, a nonprofit group of hardware and software companies aiming to make text and emoji readable and accessible to everyone. Part of their goal is to make languages look the same on all devices; a Japanese character should be typographically consistent across all media, for example. But Unicode is probably best known for being the gatekeeper of emoji: releasing them, standardizing them, and approving or rejecting new ones.
Jennifer Daniel is the first woman at the helm of the Emoji Subcommittee for the Unicode Consortium and a fierce advocate for inclusive, thoughtful emoji. She initially rose to prominence for introducing Mx. Claus, a gender-inclusive alternative to Santa and Mrs. Claus; a non-gendered person breastfeeding a non-gendered baby; and a masculine face wearing a bridal veil.
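Many of the inclusive emoji Daniel championed are not brand-new characters with their own code points but zero-width-joiner (ZWJ) sequences: existing characters glued together so that platforms render them as a single glyph. A short Python sketch of how Mx. Claus is encoded:

```python
# Mx. Claus (🧑‍🎄) is a ZWJ sequence: PERSON + ZERO WIDTH JOINER + CHRISTMAS TREE.
# Platforms that support the sequence draw one combined glyph; older ones
# fall back to showing the person and the tree side by side.
ZWJ = "\u200d"           # ZERO WIDTH JOINER
person = "\U0001f9d1"    # 🧑 PERSON
tree = "\U0001f384"      # 🎄 CHRISTMAS TREE

mx_claus = person + ZWJ + tree
print(mx_claus)                               # one glyph on supporting platforms
print([f"U+{ord(c):04X}" for c in mx_claus])  # the three underlying code points
```

The gendered variants, by contrast, are single code points (Santa is U+1F385); the ZWJ mechanism lets the standard add combinations like Mx. Claus without allocating a new character for each one.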
Now she’s on a mission to bring emoji to a post-pandemic future in which they are as broadly representative as possible. That means taking on an increasingly public role, whether it’s with her popular and delightfully nerdy Substack newsletter, What Would Jennifer Do? (in which she analyzes the design process for upcoming emoji), or inviting the general public to submit concerns about emoji and speak up if they aren’t representative or accurate.
“There isn’t a precedent here,” Daniel says of her job. And to Daniel, that’s exciting not just for her but for the future of human communication.
I spoke to her about how she sees her role and the future of emoji. The interview has been lightly edited and condensed.
What does it mean to chair the subcommittee on emoji? What do you do?
It’s not sexy. [laughs] A lot of it is managing volunteers [the committee is composed of volunteers who review applications and help in approval and design]. There’s a lot of paperwork. A lot of meetings. We meet twice a week.
I read a lot and talk to a lot of people. I recently talked to a gesture linguist to learn how people use their hands in different cultures. How do we make better hand-gesture emoji? If the image is no good or isn’t clear, it’s a dealbreaker. I’m constantly doing lots of research and consulting with different experts. I’ll be on the phone with a botanical garden about flowers, or a whale expert to get the whale emoji right, or a cardiovascular surgeon so we have the anatomy of the heart down.
There’s an old essay by Beatrice Warde about typography. She asked if a good typeface is a bedazzled crystal goblet or a transparent one. Some would say the ornate one because it’s so fancy, and others would say the crystal goblet because you can see and appreciate the wine. With emoji, I lean more toward the “transparent crystal goblet” philosophy.
Why should we care about how our emoji are designed?
My understanding is that 80% of communication is nonverbal. There’s a parallel in how we communicate. We text how we talk. It’s informal, it’s loose. You’re pausing to take a breath. Emoji are shared alongside words.
When emoji first came around, there was a misconception that they were ruining language. Learning a new language is really hard, and emoji is kind of like a new language. It works with how you already communicate, and it evolves as you evolve, just like how you communicate and present yourself. You can look at the nearly 3,000 emoji and it [their interpretation] changes by age or gender or geographic area. When we talk to someone and are making eye contact, you shift your body language, and that’s an emotional contagion. It builds empathy and connection. It gives you permission to reveal that about yourself. Emoji can do that, all in an image.
Product design gets an AI makeover
It’s a tall order, but one that Zapf says artificial intelligence (AI) technology can support by capturing the right data and guiding engineers through product design and development.
No wonder a November 2020 McKinsey survey reveals that more than half of organizations have adopted AI in at least one function, and 22% of respondents report at least 5% of their companywide earnings are attributable to AI. And in manufacturing, 71% of respondents have seen a 5% or more increase in revenue with AI adoption.
But that wasn’t always the case. Once “rarely used in product development,” AI has experienced an evolution over the past few years, Zapf says. Today, tech giants known for their innovations in AI, such as Google, IBM, and Amazon, “have set new standards for the use of AI in other processes,” such as engineering.
“AI is a promising and exploratory area that can significantly improve user experience for designing engineers, as well as gather relevant data in the development process for specific applications,” says Katrien Wyckaert, director of industry solutions for Siemens Industry Software.
The result is a growing appreciation for a technology that promises to simplify complex systems, get products to market faster, and drive product innovation.
Simplifying complex systems
A perfect example of AI’s power to overhaul product development is Renault. In response to increasing consumer demand, the French automaker is equipping a growing number of new vehicle models with an automated manual transmission (AMT)—a system that behaves like an automatic transmission but allows drivers to shift gears electronically using a push-button command.
AMTs are popular among consumers, but designing them can present formidable challenges. That’s because an AMT’s performance depends on the operation of three distinct subsystems: an electro-mechanical actuator that shifts the gears, electronic sensors that monitor vehicle status, and software embedded in the transmission control unit, which controls the engine. Because of this complexity, it can take up to a year of extensive trial and error to define the system’s functional requirements, design the actuator mechanics, develop the necessary software, and validate the overall system.
In an effort to streamline its AMT development process, Renault turned to Simcenter Amesim software from Siemens Digital Industries Software. The simulation technology relies on artificial neural networks, AI “learning” systems loosely modeled on the human brain. Engineers simply drag, drop, and connect icons to graphically create a model. When displayed on a screen as a sketch, the model illustrates the relationship between all the various elements of an AMT system. In turn, engineers can predict the behavior and performance of the AMT and make any necessary refinements early in the development cycle, avoiding late-stage problems and delays. In fact, by using a virtual engine and transmissions as stand-ins while developing hardware, Renault has managed to cut its AMT development time almost in half.
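Simcenter’s internals are proprietary, but the core idea behind a neural-network surrogate is straightforward: sample an expensive simulator, fit a small network to its input-output behavior, then query the network instead of re-running the solver. A minimal, purely illustrative NumPy sketch (the “simulator” here is a stand-in function, not Renault’s transmission model):

```python
import numpy as np

# Illustrative surrogate modeling: a tiny neural network learns the
# input->output map of an "expensive" simulation, so later predictions
# cost microseconds instead of a full solver run.
rng = np.random.default_rng(0)

def expensive_simulation(x):
    # stand-in for a physics solver: a smooth nonlinear response
    return np.sin(3 * x) + 0.5 * x

# sample training data from the simulator
X = rng.uniform(-1, 1, size=(200, 1))
y = expensive_simulation(X)

# one-hidden-layer network, trained with plain gradient descent
W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)       # hidden activations
    return h, h @ W2 + b2          # surrogate prediction

_, pred0 = forward(X)
initial_loss = float(np.mean((pred0 - y) ** 2))

for _ in range(2000):
    h, pred = forward(X)
    grad_out = 2 * (pred - y) / len(X)        # dLoss/dPred
    gW2 = h.T @ grad_out; gb2 = grad_out.sum(0)
    grad_h = (grad_out @ W2.T) * (1 - h**2)   # backprop through tanh
    gW1 = X.T @ grad_h; gb1 = grad_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
final_loss = float(np.mean((pred - y) ** 2))
print(f"loss: {initial_loss:.4f} -> {final_loss:.4f}")
```

Once trained on enough solver runs, a surrogate like this can be evaluated thousands of times during design exploration, which is the kind of speedup that lets virtual engines and transmissions stand in for hardware.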
Speed without sacrificing quality
Emerging environmental standards are also prompting Renault to rely more heavily on AI. To comply with new carbon dioxide emissions standards, Renault has been working on the design and development of hybrid vehicles. But hybrid engines are far more complex to develop than those found in vehicles with a single energy source, such as a conventional car. That’s because hybrid engines require engineers to perform complex feats like balancing the power required from multiple energy sources, choosing from a multitude of architectures, and examining the impact of transmissions and cooling systems on a vehicle’s energy performance.
“To meet new environmental standards for a hybrid engine, we must completely rethink the architecture of gasoline engines,” says Vincent Talon, head of simulation at Renault. The problem, he adds, is that carefully examining “the dozens of different actuators that can influence the final results of fuel consumption and pollutant emissions” is a lengthy and complex process, made all the more difficult by rigid timelines.
“Today, we clearly don’t have the time to painstakingly evaluate various hybrid powertrain architectures,” says Talon. “Rather, we need to use an advanced methodology to manage this new complexity.”
For more on AI in industrial applications, visit www.siemens.com/artificialintelligence.
Download the full report.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.