The showrunner of Sacred Games, Vikramaditya Motwane, told me that after the furore around that episode, he was told to avoid “anything to do with religion.” Local media outlets reported that the government began seriously considering censoring streaming because of the lynching scene. The news that this might happen ricocheted around the industry.
I traveled to India in late 2019 to see how the country’s nascent streaming industry was faring in its struggles with Hindu nationalism.
Srishti Behl Arya comes from a family of Bollywood filmmakers. Her father, a director and producer, worked with Amitabh Bachchan, a legendary actor. When she was little, she accompanied her parents on location, where she and the other children of the cast and crew pretended to be film stars. “We ran around like psychos,” she told me when I visited her at Netflix’s offices in Bandra-Kurla, a wealthy suburban business district in Mumbai.
In 2018, Netflix hired Arya to commission feature-length content. That year, the company made more than 20 original films and five original series in Hindi. But this did little to alter its public persona. In a country with more than 24 major languages, Netflix was still viewed as an English-language platform for westernized Indians. And this is where Arya, who knew everyone who mattered in Hindi film, fit into the picture. She had worked in advertising, and then as an actor and a writer, before moving on to TV production.
Soon she enlisted many of her childhood friends, who had grown up to become some of the most powerful people in the Hindi film industry, to work for Netflix. She signed on Zoya Akhtar, whose last feature film was India’s official entry to the Academy Awards, to direct a short film. Like Arya, Akhtar comes from a film family, but because Bollywood is a male-dominated industry, it’s still almost impossible for female filmmakers or female-oriented films to raise capital. By contrast, several women helmed projects at Netflix. The platform’s biggest star is Radhika Apte, a Bollywood actress who has appeared in so many Netflix productions that online wags joke she’s in all of them.
But working with Bollywood meant dealing with its shortcomings. Netflix held several workshops in Mumbai to train Indian content creators. It taught them how to develop a major series, but also helped them brush up on basics such as how to write, schedule, and budget. “That’s how we can add value to the industry,” Arya told me. “By helping it get more organized.”
On my last day in Mumbai, I went to visit Red Chillies Entertainment, a towering production house owned by Shah Rukh Khan, which produces shows for Netflix. Back in 2017, Hastings and Khan had appeared together in a stilted promotional skit announcing a new spy thriller called Bard of Blood.
The foyer was deserted on the day I arrived, except for a beautiful sculpture of Ganesha, a Hindu god who is viewed as the patron of the arts. It was wrapped in plastic to protect it from construction dust. Around it some barefoot workmen were operating power tools without any protective gear. On the fourth floor, an exhausted-looking man with slippers on his feet and salt in his dark hair emerged from an editing studio. Several years ago, newly graduated from the London Film School, Patrick Graham had been struggling to land projects when a friend suggested he try Bollywood. He floundered at first, stifled by censorship. But then, in 2018, Netflix India gave Graham the budget to produce a fictional series in which Muslims are rounded up in internment camps. Netflix also brought him in to co-write the screenplay for Leila. When we met, he was wrapping up production on Betaal, a four-episode zombie series that would be released the next year. Months earlier, in a conversation on the phone, Graham had seemed pumped at the opportunity. “It’s massive,” he’d said. But in person, in Mumbai, he was downcast. “I have to go through the series and remove anything that might offend,” he told me, gloomily. “The oversensitive people are winning.”
In November 2020, Hindu nationalists went after Netflix again. Mira Nair’s critically acclaimed adaptation of Vikram Seth’s novel A Suitable Boy showed a Muslim boy kissing a Hindu girl. A leader of the BJP’s youth wing filed a police complaint about the series for “shooting kissing scenes under temple premises.” The leader accused the show of promoting “love jihad”—a conspiracy theory that claims Muslim men are seducing Hindu women in order to convert them to Islam.
In January, another group of Hindu nationalists claimed offense, this time over a political drama on Amazon Prime Video called Tandav. They didn’t care for the depiction of an actor dressed as the Hindu god Shiva. The director quickly issued a public apology and deleted some offending scenes. But he was still named in police complaints in six states, along with members of his cast and crew. Prosecutors also charged Aparna Purohit, who heads Indian original programming for Amazon, with forgery, cyber-terrorism, and promoting hatred between classes.
The very next month, the government announced what it called a “soft-touch self-regulatory architecture” for streaming services. This new ethics code, notionally voluntary, comes with ratings and a grievance system that make streaming, in effect, just as tightly regulated as film and TV.
After the new code was announced, Amazon canceled the upcoming season of The Family Man, a planned spy thriller, and the follow-up to Paatal Lok, a crime series. It also announced plans to co-produce its first Indian film—a mythological tale starring Akshay Kumar, an actor who is known for his close ties with Hindu nationalists.
Netflix had entered India just when hundreds of millions of Indians discovered the internet. It helped create a new language for Indian streaming. In 2020, its subscriber base was estimated to have risen to 4.2 million. But whether the company—and streaming services more generally—can ultimately succeed depends in large measure on matters outside of their control.
Kashyap, the director, believes he has a handle on the censorship problem. “We will say what we want to say,” he told me. “We will simply find different ways of saying it.” On March 3, his house and those of several other Bollywood stars were raided by tax authorities in what Nawab Malik, a spokesperson for the opposition Nationalist Congress Party, described as an intimidation attempt. That same day, Netflix India announced a slate of 40 new films and series.
Meet Jennifer Daniel, the woman who decides what emoji we get to use
Emoji are now part of our language. If you’re like most people, you pepper your texts, Instagram posts, and TikTok videos with various little images to augment your words—maybe the syringe with a bit of blood dripping from it when you got your vaccination, the prayer (or high-fiving?) hands as a shortcut to “thank you,” a rosy-cheeked smiley face with jazz hands for a covid-safe hug from afar. Today’s emoji catalogue includes nearly 3,000 illustrations representing everything from emotions to food, natural phenomena, flags, and people at various stages of life.
Behind all those symbols is the Unicode Consortium, a nonprofit group of hardware and software companies aiming to make text and emoji readable and accessible to everyone. Part of its goal is to make languages look the same on all devices; a Japanese character should be typographically consistent across all media, for example. But Unicode is probably best known for being the gatekeeper of emoji: releasing them, standardizing them, and approving or rejecting new ones.
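Unicode’s role here can be made concrete with a small sketch. The snippet below is an illustration, not part of any Unicode tooling, and the `codepoints` helper is a name invented for this example; it shows that an emoji is just a standardized sequence of codepoints, which every platform then renders with its own artwork:

```python
# What Unicode actually standardizes: every emoji is one or more codepoints,
# identical on every device; only the vendor's artwork differs.
def codepoints(s: str) -> list[str]:
    """Return the U+XXXX codepoint labels for each character in s."""
    return [f"U+{ord(ch):04X}" for ch in s]

# A classic single-codepoint emoji:
print(codepoints("\N{THUMBS UP SIGN}"))
# ['U+1F44D']

# Many newer emoji are sequences: existing emoji glued together with the
# zero-width joiner (ZWJ, U+200D). The gender-inclusive Mx. Claus mentioned
# above is "person" + ZWJ + "Christmas tree".
mx_claus = "\U0001F9D1\u200D\U0001F384"
print(codepoints(mx_claus))
# ['U+1F9D1', 'U+200D', 'U+1F384']
```

Composing new emoji out of ZWJ sequences like this is cheaper for vendors than minting a fresh codepoint, which is one reason many recent inclusive emoji are built this way.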
Jennifer Daniel is the first woman at the helm of the Emoji Subcommittee for the Unicode Consortium and a fierce advocate for inclusive, thoughtful emoji. She initially rose to prominence for introducing Mx. Claus, a gender-inclusive alternative to Santa and Mrs. Claus; a non-gendered person breastfeeding a non-gendered baby; and a masculine face wearing a bridal veil.
Now she’s on a mission to bring emoji to a post-pandemic future in which they are as broadly representative as possible. That means taking on an increasingly public role, whether it’s with her popular and delightfully nerdy Substack newsletter, What Would Jennifer Do? (in which she analyzes the design process for upcoming emoji), or inviting the general public to submit concerns about emoji and speak up if they aren’t representative or accurate.
“There isn’t a precedent here,” Daniel says of her job. And to Daniel, that’s exciting not just for her but for the future of human communication.
I spoke to her about how she sees her role and the future of emoji. The interview has been lightly edited and condensed.
What does it mean to chair the subcommittee on emoji? What do you do?
It’s not sexy. [laughs] A lot of it is managing volunteers [the committee is composed of volunteers who review applications and help in approval and design]. There’s a lot of paperwork. A lot of meetings. We meet twice a week.
I read a lot and talk to a lot of people. I recently talked to a gesture linguist to learn how people use their hands in different cultures. How do we make better hand-gesture emoji? If the image is no good or isn’t clear, it’s a dealbreaker. I’m constantly doing lots of research and consulting with different experts. I’ll be on the phone with a botanical garden about flowers, or a whale expert to get the whale emoji right, or a cardiovascular surgeon so we have the anatomy of the heart down.
There’s an old essay by Beatrice Warde about typography. She asked if a good typeface is a bedazzled crystal goblet or a transparent one. Some would say the ornate one because it’s so fancy, and others would say the crystal goblet because you can see and appreciate the wine. With emoji, I lean more toward the “transparent crystal goblet” philosophy.
Why should we care about how our emoji are designed?
My understanding is that 80% of communication is nonverbal. There’s a parallel in how we communicate. We text how we talk. It’s informal, it’s loose. You’re pausing to take a breath. Emoji are shared alongside words.
When emoji first came around, we had the misconception that they were ruining language. Learning a new language is really hard, and emoji is kind of like a new language. It works with how you already communicate. It evolves as you evolve. How you communicate and present yourself evolves, just like yourself. You can look at the nearly 3,000 emoji and it [their interpretation] changes by age or gender or geographic area. When we talk to someone and are making eye contact, you shift your body language, and that’s an emotional contagion. It builds empathy and connection. It gives you permission to reveal that about yourself. Emoji can do that, all in an image.
Product design gets an AI makeover
It’s a tall order, but one that Zapf says artificial intelligence (AI) technology can support by capturing the right data and guiding engineers through product design and development.
No wonder a November 2020 McKinsey survey reveals that more than half of organizations have adopted AI in at least one function, and 22% of respondents report at least 5% of their companywide earnings are attributable to AI. And in manufacturing, 71% of respondents have seen a 5% or more increase in revenue with AI adoption.
But that wasn’t always the case. Once “rarely used in product development,” AI has experienced an evolution over the past few years, Zapf says. Today, tech giants known for their innovations in AI, such as Google, IBM, and Amazon, “have set new standards for the use of AI in other processes,” such as engineering.
“AI is a promising and exploratory area that can significantly improve user experience for designing engineers, as well as gather relevant data in the development process for specific applications,” says Katrien Wyckaert, director of industry solutions for Siemens Industry Software.
The result is a growing appreciation for a technology that promises to simplify complex systems, get products to market faster, and drive product innovation.
Simplifying complex systems
A perfect example of AI’s power to overhaul product development is Renault. In response to increasing consumer demand, the French automaker is equipping a growing number of new vehicle models with an automated manual transmission (AMT)—a system that behaves like an automatic transmission but allows drivers to shift gears electronically using a push-button command.
AMTs are popular among consumers, but designing them can present formidable challenges. That’s because an AMT’s performance depends on the operation of three distinct subsystems: an electro-mechanical actuator that shifts the gears, electronic sensors that monitor vehicle status, and software embedded in the transmission control unit, which controls the engine. Because of this complexity, it can take up to a year of extensive trial and error to define the system’s functional requirements, design the actuator mechanics, develop the necessary software, and validate the overall system.
In an effort to streamline its AMT development process, Renault turned to Simcenter Amesim software from Siemens Digital Industries Software. The simulation technology relies on artificial neural networks, AI “learning” systems loosely modeled on the human brain. Engineers simply drag, drop, and connect icons to graphically create a model. When displayed on a screen as a sketch, the model illustrates the relationship between all the various elements of an AMT system. In turn, engineers can predict the behavior and performance of the AMT and make any necessary refinements early in the development cycle, avoiding late-stage problems and delays. In fact, by using a virtual engine and transmissions as stand-ins while developing hardware, Renault has managed to cut its AMT development time almost in half.
Speed without sacrificing quality
Emerging environmental standards are also prompting Renault to rely more heavily on AI. To comply with new carbon dioxide emissions standards, Renault has been working on the design and development of hybrid vehicles. But hybrid engines are far more complex to develop than those found in vehicles with a single energy source, such as a conventional car. That’s because hybrid engines require engineers to perform complex feats like balancing the power required from multiple energy sources, choosing from a multitude of architectures, and examining the impact of transmissions and cooling systems on a vehicle’s energy performance.
“To meet new environmental standards for a hybrid engine, we must completely rethink the architecture of gasoline engines,” says Vincent Talon, head of simulation at Renault. The problem, he adds, is that carefully examining “the dozens of different actuators that can influence the final results of fuel consumption and pollutant emissions” is a lengthy and complex process, made all the more difficult by rigid timelines.
“Today, we clearly don’t have the time to painstakingly evaluate various hybrid powertrain architectures,” says Talon. “Rather, we needed to use an advanced methodology to manage this new complexity.”
For more on AI in industrial applications, visit www.siemens.com/artificialintelligence.
Download the full report.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
AI consumes a lot of energy. Hackers could make it consume more.
The attack: In this kind of neural network, changing the input, such as the image it’s fed, changes how much computation it needs to solve the problem. This opens up a vulnerability that hackers could exploit, as the researchers from the Maryland Cybersecurity Center outlined in a new paper being presented at the International Conference on Learning Representations this week. By adding small amounts of noise to a network’s inputs, they made it perceive the inputs as more difficult and jacked up its computation.
When they assumed the attacker had full information about the neural network, they were able to max out its energy draw. When they assumed the attacker had limited to no information, they were still able to slow down the network’s processing and increase energy usage by 20% to 80%. The reason, as the researchers found, is that the attacks transfer well across different types of neural networks. Designing an attack for one image classification system is enough to disrupt many, says Yiğitcan Kaya, a PhD student and paper coauthor.
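To see why noisier inputs cost more energy, consider a toy sketch of an input-adaptive, early-exit network. Everything below is an illustrative assumption: the `classify` function, its signal/noise model, and the thresholds are invented for this example rather than taken from the paper. The point is only the mechanism: a confident input exits after a few layers, while a perturbed one forces the whole network to run.

```python
# Toy model of an input-adaptive ("early exit") classifier. Each layer adds
# confidence; the network stops as soon as confidence clears a threshold.
# Computation, and hence energy, scales with the number of layers executed.
def classify(signal: float, noise: float, threshold: float = 0.9,
             n_layers: int = 10) -> int:
    """Return how many layers run before the network commits to an answer.

    `signal` models how clean the input is; `noise` models an attacker's
    perturbation, which erodes the confidence gained at each layer.
    """
    confidence = 0.0
    for layer in range(1, n_layers + 1):
        confidence += max(signal - noise, 0.01)  # noise shrinks each layer's gain
        if confidence >= threshold:              # early exit: easy input, cheap
            return layer
    return n_layers                              # full network ran: expensive

clean_cost = classify(signal=0.5, noise=0.0)     # confident input exits early
attacked_cost = classify(signal=0.5, noise=0.45) # noisy input runs all layers
print(clean_cost, attacked_cost)
```

In the real attack the perturbation is not random noise but is crafted adversarially to suppress early-exit confidence, which is why it can push even easy inputs through the entire network.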
The caveat: This kind of attack is still somewhat theoretical. Input-adaptive architectures aren’t yet commonly used in real-world applications. But the researchers believe that will quickly change as the industry comes under pressure to deploy lighter-weight neural networks, such as for smart-home and other IoT devices. Tudor Dumitraş, the professor who advised the research, says more work is needed to understand the extent to which this kind of threat could cause damage. But, he adds, this paper is a first step to raising awareness: “What’s important to me is to bring to people’s attention the fact that this is a new threat model, and these kinds of attacks can be done.”