

Podcast: when your face is your ticket




In part three of this series, Jennifer Strong and the team at MIT Technology Review jump on the court to unpack just how much things are changing.

We meet:

  • Donnie Scott, senior vice president of public security, IDEMIA
  • Michael D’Auria, vice president of business development, Second Spectrum
  • Jason Gay, sports columnist, The Wall Street Journal
  • Rachel Goodger, director of business development, Fancam
  • Rich Wang, director of analytics and fan engagement, Minnesota Vikings


This episode was reported and produced by Jennifer Strong, Anthony Green, Tate Ryan-Mosley, Emma Cillekens and Karen Hao. We’re edited by Michael Reilly and Gideon Lichfield. 


 [TR ID]

Strong: I’m in Queens in the neighborhood near a massive stadium complex called Citi Field. It’s home to the New York Mets, though because it’s baseball’s offseason, everything is locked up right now, and all you can really hear is rush hour traffic.

But if you look up, along the edge of the stadium where thousands of fans will, eventually, return, you can see some of the hardware that powers the team’s use of face recognition. These cameras are meant to detect faces that have been banned from the grounds – folks like ticket scalpers, people who’ve run onto the field, even people who’ve committed crimes out in the parking lot. That system is powered by one of the biggest names in face recognition – N-E-C. It’s able to measure things like ears – and it still works on people wearing masks, hats and sunglasses.

And then once you get over to the turnstiles – there’s another face system from a company that’s known for airport security – called Clear – and that’s for ticketless entry. Basically you can use your face as a ticket. When you get inside there’s a payments system in a concessions area – meaning you can buy a beer with your face, if you wish.

But it’s when you get to your seat that things get really interesting. Even before the pandemic, attendance at baseball games had been on the decline. In fact, this stadium has about 15-thousand fewer seats than the one it replaced. And so, on the one hand, stadiums are trying to make the experience as safe and hassle-free as they possibly can. But they’re also trying to learn as much as they can about who these people are in the stands – and that, too, is being done with face recognition. I’m Jennifer Strong, and in this latest episode of our mini-series, we look at how this and other tracking systems are changing the sports experience in the stands and on the court.


[Sound from Chicago White Sox at Milwaukee Brewers (Anchor): Ok we are back to playing ball. Two out. 1st inning. No score. And the batter will be Harold Baines with a 7-game hitting streak…]

[Sound from Chicago White Sox at Milwaukee Brewers: crowd cheers]

Strong: For decades, crowding around the TV or radio was the go-to way to consume sports.  Oftentimes, that meant tuning in for hours like this 1984 Major League Baseball game between the Chicago White Sox and the Milwaukee Brewers.

[Sound from Chicago White Sox at Milwaukee Brewers (Anchor): That’s deep in the center field. Going back.. It could be out of here. Manning looks up. It’s outta here! A home run for Harold Baines. The Sox win 7-6 in the longest game in American League history.]

Strong: The game lasted eight hours and six minutes, and it had to be completed over two days. But sports watching today looks pretty different. Human attention spans are measured in seconds, and they’re shrinking. Millions of people still tune in to watch, but about a third stream games on mobile devices. And of those who still watch on television, 80 percent do so while using a second device to search stats, check live scores, message other fans, and watch related videos. Fans who attend games in person are now seen as high-value customers. And that’s another place where face ID comes in.

[Sound from CNBC newscast (Anchor): And if you were angered over Facebook invading your privacy, you may not want to attend a major sporting event.]

[Sound from CNBC newscast (Eric Chemi): New high tech cameras can now snap a high-rez photo of every person, in every seat, every minute of the game.]

Strong: Face data collected in stadiums by companies like Fancam is now being used to get insights on fan demographics like age, gender and race. Panoramic cameras capture images in such fine detail that you can zoom in from a bird’s-eye view of a stadium into the stands, onto an individual person, and still make out nuances like a smile, the writing on their shirt, even the texture of their jacket. And now you can also quickly calculate the percentage of people wearing masks – like in the case of the NFL’s Minnesota Vikings.

Wang: This is new for everybody. We’re still trying to work out exactly how we enforce these mask rules and how to monitor them and track them.

Strong: Rich Wang is their director of analytics & fan engagement. He’s on a Zoom call showcasing how they use computer vision. 

Wang: Also, if you look at this graph, the lowest point is that 87% of people have their mask on most of the time and in most of the game. People are, you know, behaving and enforcing the mask rule. So those are really positive storylines that will continue to support our case of increasing fans.

Goodger: Being able to utilize these stats to reopen venues and get fans back into the stadium. And then just as a safeguard as well, once fans are back in the stadium using some of these metrics in addition to the mask usage, also being able to utilize the information of section capacity. 

Strong: And this is Rachel Goodger, the director of business development at Fancam.

Goodger: So, obviously fans have a seat assigned to them when they go back into the stadium and fans are socially distanced. But what happens if fans start to move around the stadium, and one section becomes over capacity? You know, in real time, for us to be able to notify staff, and for them to be able to see that information and say, ok well, we need to go break up this section a little bit. And then for teams, being able to look back after every single game and say “wow, we did a great job today.” Or “wow, we really need to work more on mask usage in the lower bowl or upper bowl of this section” and things like that. I think it is data that is going to be very important for not only, as I mentioned, reopening these stadiums but keeping them open in the future.

Strong: The company sells data back to the sports teams, which use it to advance their marketing – affecting everything from what music is played at stadiums to what ads people see during and even after the game has ended.

Scott: You’re gonna start to see the data that you’re willing to share more broadly coupled with the technology used for identification to make things more predictable.

Strong: Donnie Scott is the senior vice president of public security at IDEMIA. It designs AI-driven identity and security solutions for all kinds of businesses.

Scott: And that would be everything from a digital driver’s license on your phone to a physical license, to a credit card, to an electronic payment mechanism.

Strong: They also make biometric technologies that recognize faces, fingerprints or eyes which can be used to verify identity in sports stadiums or other places like airports and theaters.

Scott: So, we would essentially embed the technology in their loyalty program but we’d add to it, the ability to link either their biometrics – face, fingerprint, iris in some countries that prefer it because of face coverings and other things, or their mobile device where you could authoritatively share your biometric information, or the fact that you’re a season ticket holder, with a piece of equipment at the venue. And therefore, you know, when you show up, they know, okay, Jennifer has tickets to this game. They’re valid at this date. She can pass through the gate.

Strong: Their goal? To be invisible. Identity data is captured by cameras concealed in what appears to be a normal turnstile. It’s all about creating what’s known as a frictionless experience.

Scott: So particularly around theme parks, um, but the same with stadiums and other concert venues, the technology is evolving from being a device that kind of stands out to being part of the normal flow and queue of the venue itself.

Strong: We already unlock smartphones with our eyes, fingers and face and that got us used to this idea of biometrics in our daily lives. Scott thinks that may be why the response to these services has been mostly positive. 

Scott: You know, I’ve watched my kids grow up with  first opening an Apple device with their thumb print, then moving on that they felt they were very mistreated because they couldn’t unlock it with their face. And we’ve all become, you know, the last 15 years, 10 years, desensitized to the weirdness of it. I think most of society is focused on how it makes my life easier.

Strong: And in a world where confirming your identity is as easy as unlocking a phone, your biometric data could become more important than a passport, car keys or any other physical item we carry with us.

Scott: I think people are going to become really accustomed to the technology being there, how to use it, how to interact with it and what to expect from it because I think we’re going to see it in all walks of life. We’re going to see it when we travel. We’re going to see it when we do business with our government. We’re going to see it when we do business in grocery stores in you know sports and concert venues and music parks as well. So it’s going to become such a standard way of life that the access part will become a de facto normal. And then it’s what happens next.

Strong: And what happens next could mean more personalized experiences. 

Scott: I think that the next thing to come is going to be, to enable the fan experience. But after that, it becomes, how does the fan experience fit in your life? And, you know, that is a concept that is pretty big and broad, but one that, once the first two pieces are enabled through technology and enabled through an acceptance by the user themselves, are only natural things that come with an improved, mature use of a technology. You could think of an amusement park, where kids could walk up to their favorite character and be recognized for who they are and have a custom experience specific to them.

Strong: Which is likely to happen at scale. 

Scott: You could see a future where as you arrived to the airport or as you arrive to the sporting event, and it directs you to your parking based on recognizing your car or on sharing who you are from your phone with the airport operator or the airline or the TSA themselves.  You would have an, you know, a known time to gate, right. Which is the ideal state where it says I’ve got a five o’clock flight today based on the wait times that are predicted and where we are. I know that it’s going to take me 12 minutes to get from the front of the airport  through the checkpoint to the gate. And you’re going to have directions along the way, the same experience is going to happen for  sports venues and for concert venues, where from parking, you’re going to be directed through the shortest line, you know, that line’s going to move quickly because it’s biometrically enabled, and then it’s going to be able to guide you to where can I get my concessions that I want, how long do I have to, before I have to start walking, so I can be in my seat before it kicks off, I think those types of secondary benefits are going to come pretty quickly as the, as the venues get instrumented, to be able to recognize and identify folks.

D’Auria: I think there’s a huge opportunity to make the sports fan experience more engaging, more potent. And I just think we’re at the early days of that. I’m Mike D’Auria and I’m the vice president of business development at Second Spectrum.

Strong: The company provides tracking data and analytics software for professional sports leagues like the NBA and Major League Soccer. A series of cameras, no bigger than your standard security camera, provides unprecedented machine understanding of every game.

D’Auria: The kind of core of this technology is computer vision that runs on top of these camera feeds. And what this is intended to do is track the movement of every player and the ball 25 times a second. So you can kind of think over the course of one umm typical NBA basketball game, you’re able to capture millions of data points that didn’t exist before and use those to kind of, build a suite of products or experiences on top of that can really change the way that we see and interact with sports.

Strong: Those data points are rapidly analyzed with AI, which can spit out predictions such as the likelihood a player will sink a three-pointer—while the play is still in progress. It’s also using this data to deliver a more personalized, interactive viewing experience for fans watching remotely. 

D’Auria: In this last NBA finals, we ran what we call video augmentation essentially in real time on top of the game. And so what you could do there is, for example, take that shot probability model. And while the game is being played, you could integrate into 3D space in the video a shot probability bubble over every offensive player’s head that updates in real time. We can diagram the play that’s being run as it’s unfolding. So if you’re trying to learn about the game a little bit, you can kind of, you know, have a bit of a tutorial, or what would it feel like to have a coach sitting next to you. You know, or if you just want to have fun or kind of game-ify this a little bit, you know, every time somebody dunks the ball, you can see a lightning strike on the backboard. And so each of those experiences might not be right for everybody, but I think we will move to a world where live sports can be really personalized to the way you want to view it.

Strong: And access to troves of data has transformed how coaches train their players. 

D’Auria: So if you kind of step back and think about the way data has traditionally been captured in sports, you would have people either sitting in the stands or watching the game on TV and kind of manually coding. That was a shot. That was a pass. That was a pick and roll action. And so from this kind of underlying tracking dataset you can apply machine learning to kind of automate that whole process.

Strong: That automation allows for all that data to be matched to game film. Coaches, general managers, and analysts can then sift through it with a software tool that functions like a search engine.  

D’Auria: And so for folks who work on an NBA team, you can ask very complicated questions or make very kind of detailed queries about the game. And with a few keystrokes, a few clicks of your mouse, you can get a very precise answer in data visualization and an automatically generated playlist of, you know, for example, if I wanted to look at Anthony Davis, LeBron James, pick and roll from the right wing, where the defense ices and Anthony Davis rolls and somebody tags him from the weak side, and so LeBron James takes a jump shot and makes it. You know, you can get the very precise set of every time that combination has happened in the course of these guys’ NBA careers in a matter of seconds, and then kind of use that for your coaching purposes. And now, uh, someone at a team level can spend their time saying, well, I have this video or this information, how can I help a coach implement that into his game plan? Or how can I help my players kind of learn something new on the court? And so it kind of shifts their workflow to teaching and implementation versus kind of, you know, data gathering and manual labor.

Strong: And he says, over the next couple of years, the roles of these machines in the game could shift from assistant coach to assistant referee—adding context and nuance to difficult calls.

D’Auria: I mean, we’ve seen this already in some other places where we work. So we’ll kind of give the soccer example of, you now have technology that will help with the goal, no goal call, right? You see this in tennis with computer systems being used to kind of judge if a ball is over the line or, you know, in bounds or out of bounds, and be able to do this with precision that’s, quite frankly, better than what a line judge could do, or a referee who might have a really difficult angle to see if, like, literally every millimeter of the ball went over. You’re starting to see this with the offside line in soccer as well. And so I think generally the first place this happens is to basically, um, you know, augment or assist a referee’s capabilities. So you can kind of think about providing a referee an additional data source or, you know, an additional validation of one of their decisions.

Strong: Because the system can already identify players from their jerseys, Second Spectrum doesn’t need to use facial mapping or recognition. But that kind of detail is useful for analytics, and not just for capturing faces. Right now, players appear in the system as dots on a map. And as its camera systems improve, those dots could transform into full skeletons. Extra detail, like real-time elbow angle, could help with even more accurate shot predictions. Though not everyone is on board.

Gay: You know, a sport that I follow and find fascinating is bike racing, and bike racing is a sport that is actually in a long conversation about removing technology.

Strong: Jason Gay is a sports columnist for The Wall Street Journal.

Gay: Technology now in cycling can say, okay, if you want to win this race or catch up to this person, you have to put out X amount of effort for X amount of minutes. And you actually have this data right on an onboard computer, on a bicycle in front of you, telling you exactly what to do. Now, that’s like an amazing thing. However, it’s also not terribly human, right? It seems to be somewhat clinical, and it’s created what many people feel is a little bit of a dry style of racing, where people are data-driven and they’re using their heads too much, as opposed to their hearts. The French have an expression, panache. They love to see races won with panache, which basically means gut instinct. And so there’s been conversations about, well, what if we take away these computers from riders and make them, you know, use their heads and their hearts to cycle. Now there’s a safety consideration here that’s concurrent with this, right? Having that information often creates a safer experience for a rider. But it is fascinating that the tech has gotten so good in certain instances, in terms of maximizing effort or telling an athlete what effort is required, that they’re starting to draw back from that.

Strong: And for sports embracing this tech, it’s changing how the game is played.

Gay: Here’s an example from baseball and we see quite often a manager will come to a mound and remove a pitcher from a game, even though the pitcher is pitching very, very well that day, the reason they remove them is that the data shows that this pitcher tends to break down at a certain point. It’s almost like a car tire or something. And they’re just saying, well this pitcher at this point of the game historically is going to stop performing at the high level we need him to. So we’re going to make that move. We’re removing sort of the gut of saying there oh well he’s rolling today, let’s just let them go. They’re relying on the numbers.

Strong: Data driven game strategies are also changing how teams recruit. Like in basketball, where players who can execute a three-point shot (once considered a gimmick by the NBA) are now deemed extremely valuable. 

Gay: The reason is that basketball teams by looking at their numbers discovered that a three-point shot is a more efficient shot. You’d rather take that three-point shot than certainly take a longer two point jump shot. And so you prioritize the three pointer in an offense. The most extreme example of this – the Houston Rockets, where you have a perennial MVP candidate in James Harden who oftentimes is taking three pointer after three pointer in a game, because it’s an efficient way for them to play.

[Sound of Houston Rockets at Los Angeles Clippers (Announcers): Harden, nobody near him, sets all the time and nails the three-pointer! Steps back, open three, got it! James Harden steps back puts up a three, It goes, bounces and drops through!]

Strong: Technology is also playing assistant coach in places like the locker room of The Dallas Mavericks. 

[Sound from video of Mark Cuban at Dallas Mavericks (Cuban): What will happen is when a player walks in, or anybody walks in, we’ll have facial recognition. It’ll take a picture of you and it will say ‘ok here comes Mark or here comes Dirk’]

Strong: Mark Cuban is their owner.

[Sound from video of Mark Cuban at Dallas Mavericks (Cuban): And for any of the players or any of the staff, it’ll put up coaches’ notes: here’s what you’re expected to do and tell you what’s going on. For anybody we don’t know, it’s going to be ehh-ehh-ehh get the heck out.]

Strong: And it’s not just basketball. Using AI to find the most efficient pattern of play is growing across all sports. And there’s a role for face ID too. That same face-mapping that sees when you’re looking directly at your phone to unlock it could also help coaches see what players are focusing on during the game.

Gay: I mean, that’s an incredibly integral thing for say a football quarterback. If you could somehow be able to render what a football quarterback is looking at or more importantly not looking at, not seeing downfield. Well, you could see, you know, immediate utility for any quarterback, any football team. But it also applies to a point guard or, you know, somebody playing left tackle or somebody catching on a baseball team. There are numerous plays that if you’re able to sort of look at what an athlete is seeing on the court or not seeing again, which is probably the more essential thing, that would have enormous consequences. 

Strong: Next episode, we wrap up our mini-series with a look at how face mapping is transforming the shopping experience. And spoiler alert – it goes way beyond just identifying who’s in the store.

Guive Balooch: In order to really virtually be able to try on with augmented reality makeup, you need to detect where the eye is and where the eyebrow is. And, um, it has to be at a level of accuracy that when the product’s on there, it doesn’t look like it’s not exactly on your lip. And people’s lips can vary in shape, and the color between your skin tone and your lip can also be very different. And so you need to have an algorithm that can detect it and make sure it works.

Strong: This episode was reported and produced by me, Anthony Green, Tate Ryan-Mosley, Emma Cillekens and Karen Hao. We’re edited by Michael Reilly and Gideon Lichfield. Thanks for listening, I’m Jennifer Strong. 



Meet Jennifer Daniel, the woman who decides what emoji we get to use




Emoji are now part of our language. If you’re like most people, you pepper your texts, Instagram posts, and TikTok videos with various little images to augment your words—maybe the syringe with a bit of blood dripping from it when you got your vaccination, the prayer (or high-fiving?) hands as a shortcut to “thank you,” a rosy-cheeked smiley face with jazz hands for a covid-safe hug from afar. Today’s emoji catalogue includes nearly 3,000 illustrations representing everything from emotions to food, natural phenomena, flags, and people at various stages of life.

Behind all those symbols is the Unicode Consortium, a nonprofit group of hardware and software companies aiming to make text and emoji readable and accessible to everyone. Part of their goal is to make languages look the same on all devices; a Japanese character should be typographically consistent across all media, for example. But Unicode is probably best known for being the gatekeeper of emoji: releasing them, standardizing them, and approving or rejecting new ones.

Jennifer Daniel is the first woman at the helm of the Emoji Subcommittee for the Unicode Consortium and a fierce advocate for inclusive, thoughtful emoji. She initially rose to prominence for introducing Mx. Claus, a gender-inclusive alternative to Santa and Mrs. Claus; a non-gendered person breastfeeding a non-gendered baby; and a masculine face wearing a bridal veil. 
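Under the hood, additions like Mx. Claus usually aren’t brand-new code points: many newer emoji are zero-width-joiner (ZWJ) sequences, existing characters glued together with U+200D so that supporting platforms draw them as a single glyph. A minimal Python sketch, using the code points Unicode publishes for the Mx. Claus sequence:

```python
# Mx. Claus is an emoji ZWJ sequence: PERSON (U+1F9D1) + zero-width
# joiner (U+200D) + CHRISTMAS TREE (U+1F384). Platforms that support
# the sequence render one glyph; older ones fall back to two emoji.
mx_claus = "\U0001F9D1\u200D\U0001F384"

print(mx_claus)                          # 🧑‍🎄 on supporting platforms
print([hex(ord(c)) for c in mx_claus])   # ['0x1f9d1', '0x200d', '0x1f384']
print(len(mx_claus))                     # 3 code points, one rendered glyph
```

This fallback behavior is also why the same emoji text can look different across devices: a platform that hasn’t adopted a sequence simply shows its component parts.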

Now she’s on a mission to bring emoji to a post-pandemic future in which they are as broadly representative as possible. That means taking on an increasingly public role, whether it’s with her popular and delightfully nerdy Substack newsletter, What Would Jennifer Do? (in which she analyzes the design process for upcoming emoji), or inviting the general public to submit concerns about emoji and speak up if they aren’t representative or accurate.

“There isn’t a precedent here,” Daniel says of her job. And to Daniel, that’s exciting not just for her but for the future of human communication.

I spoke to her about how she sees her role and the future of emoji. The interview has been lightly edited and condensed. 

What does it mean to chair the subcommittee on emoji? What do you do?

It’s not sexy. [laughs] A lot of it is managing volunteers [the committee is composed of volunteers who review applications and help in approval and design]. There’s a lot of paperwork. A lot of meetings. We meet twice a week.

I read a lot and talk to a lot of people. I recently talked to a gesture linguist to learn how people use their hands in different cultures. How do we make better hand-gesture emoji? If the image is no good or isn’t clear, it’s a dealbreaker. I’m constantly doing lots of research and consulting with different experts. I’ll be on the phone with a botanical garden about flowers, or a whale expert to get the whale emoji right, or a cardiovascular surgeon so we have the anatomy of the heart down. 

There’s an old essay by Beatrice Warde about typography. She asked if a good typeface is a bedazzled crystal goblet or a transparent one. Some would say the ornate one because it’s so fancy, and others would say the crystal goblet because you can see and appreciate the wine. With emoji, I lend myself more to the “transparent crystal goblet” philosophy. 

Why should we care about how our emoji are designed?

My understanding is that 80% of communication is nonverbal. There’s a parallel in how we communicate. We text how we talk. It’s informal, it’s loose. You’re pausing to take a breath. Emoji are shared alongside words.

When emoji first came around, we had the misconception that they were ruining language. Learning a new language is really hard, and emoji is kind of like a new language. It works with how you already communicate. It evolves as you evolve. How you communicate and present yourself evolves, just like yourself. You can look at the nearly 3,000 emoji and it [their interpretation] changes by age or gender or geographic area. When we talk to someone and are making eye contact, you shift your body language, and that’s an emotional contagion. It builds empathy and connection. It gives you permission to reveal that about yourself. Emoji can do that, all in an image.



Product design gets an AI makeover




It’s a tall order, but one that Zapf says artificial intelligence (AI) technology can support by capturing the right data and guiding engineers through product design and development.

No wonder a November 2020 McKinsey survey reveals that more than half of organizations have adopted AI in at least one function, and 22% of respondents report at least 5% of their companywide earnings are attributable to AI. And in manufacturing, 71% of respondents have seen a 5% or more increase in revenue with AI adoption.

But that wasn’t always the case. Once “rarely used in product development,” AI has experienced an evolution over the past few years, Zapf says. Today, tech giants known for their innovations in AI, such as Google, IBM, and Amazon, “have set new standards for the use of AI in other processes,” such as engineering.

“AI is a promising and exploratory area that can significantly improve user experience for designing engineers, as well as gather relevant data in the development process for specific applications,” says Katrien Wyckaert, director of industry solutions for Siemens Industry Software.

The result is a growing appreciation for a technology that promises to simplify complex systems, get products to market faster, and drive product innovation.

Simplifying complex systems

A perfect example of AI’s power to overhaul product development is Renault. In response to increasing consumer demand, the French automaker is equipping a growing number of new vehicle models with an automated manual transmission (AMT)—a system that behaves like an automatic transmission but allows drivers to shift gears electronically using a push-button command.

AMTs are popular among consumers, but designing them can present formidable challenges. That’s because an AMT’s performance depends on the operation of three distinct subsystems: an electro-mechanical actuator that shifts the gears, electronic sensors that monitor vehicle status, and software embedded in the transmission control unit, which controls the engine. Because of this complexity, it can take up to a year of extensive trial and error to define the system’s functional requirements, design the actuator mechanics, develop the necessary software, and validate the overall system.

In an effort to streamline its AMT development process, Renault turned to Simcenter Amesim software from Siemens Digital Industries Software. The simulation technology relies on artificial neural networks, AI “learning” systems loosely modeled on the human brain. Engineers simply drag, drop, and connect icons to graphically create a model. When displayed on a screen as a sketch, the model illustrates the relationship between all the various elements of an AMT system. In turn, engineers can predict the behavior and performance of the AMT and make any necessary refinements early in the development cycle, avoiding late-stage problems and delays. In fact, by using a virtual engine and transmissions as stand-ins while developing hardware, Renault has managed to cut its AMT development time almost in half.

Speed without sacrificing quality

So, too, are emerging environmental standards prompting Renault to rely more heavily on AI. To comply with emerging carbon dioxide emissions standards, Renault has been working on the design and development of hybrid vehicles. But hybrid engines are far more complex to develop than those found in vehicles with a single energy source, such as a conventional car. That’s because hybrid engines require engineers to perform complex feats like balancing the power required from multiple energy sources, choosing from a multitude of architectures, and examining the impact of transmissions and cooling systems on a vehicle’s energy performance.

“To meet new environmental standards for a hybrid engine, we must completely rethink the architecture of gasoline engines,” says Vincent Talon, head of simulation at Renault. The problem, he adds, is that carefully examining “the dozens of different actuators that can influence the final results of fuel consumption and pollutant emissions” is a lengthy and complex process, made all the more difficult by rigid timelines.

“Today, we clearly don’t have the time to painstakingly evaluate various hybrid powertrain architectures,” says Talon. “Rather, we needed to use an advanced methodology to manage this new complexity.”

For more on AI in industrial applications, visit

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.



AI consumes a lot of energy. Hackers could make it consume more.




The attack: But this kind of neural network means that if you change the input, such as the image it’s fed, you can change how much computation it needs to solve it. This opens up a vulnerability that hackers could exploit, as researchers from the Maryland Cybersecurity Center outline in a new paper being presented at the International Conference on Learning Representations this week. By adding small amounts of noise to a network’s inputs, they made it perceive the inputs as more difficult and jacked up its computation.
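The input-adaptive design the researchers target can be sketched as a toy early-exit classifier: a cheap first stage answers when it is confident, and only hard (or deliberately perturbed) inputs fall through to a much costlier second stage. Everything below, the class name, the fixed weights, and the 10x cost factor, is illustrative rather than the paper’s actual architecture:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class EarlyExitClassifier:
    """Toy input-adaptive network: stage 1 is cheap; stage 2 costs ~10x
    as much and only runs when stage 1 is unsure (hypothetical sketch)."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.w_cheap = np.array([[1.0, -1.0]] * 4)   # 4 inputs -> 2 classes
        self.w_costly = np.array([[1.0, -1.0]] * 4)
        self.ops = 0                                 # crude work counter

    def predict(self, x):
        self.ops += self.w_cheap.size                # stage 1 always runs
        p = softmax(x @ self.w_cheap)
        if p.max() >= self.threshold:                # confident: exit early
            return int(p.argmax())
        self.ops += 10 * self.w_costly.size          # stage 2: 10x the work
        return int(softmax(x @ self.w_costly).argmax())

net = EarlyExitClassifier()
net.predict(np.array([2.0, 2.0, 2.0, 2.0]))      # clear input: exits early
cheap = net.ops
net.predict(np.array([0.1, -0.1, 0.1, -0.1]))    # perturbed: stage 2 runs
print(cheap, net.ops - cheap)                    # 8 88: ~11x more work
```

An attacker who can nudge inputs so they always fall below the confidence threshold forces every query down the expensive path, which is the energy-draining effect the Maryland team demonstrates at scale.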

When they assumed the attacker had full information about the neural network, they were able to max out its energy draw. When they assumed the attacker had limited to no information, they were still able to slow down the network’s processing and increase energy usage by 20% to 80%. The reason, as the researchers found, is that the attacks transfer well across different types of neural networks. Designing an attack for one image classification system is enough to disrupt many, says Yiğitcan Kaya, a PhD student and paper coauthor.

The caveat: This kind of attack is still somewhat theoretical. Input-adaptive architectures aren’t yet commonly used in real-world applications. But the researchers believe this will quickly change from the pressures within the industry to deploy lighter weight neural networks, such as for smart home and other IoT devices. Tudor Dumitraş, the professor who advised the research, says more work is needed to understand the extent to which this kind of threat could create damage. But, he adds, this paper is a first step to raising awareness: “What’s important to me is to bring to people’s attention the fact that this is a new threat model, and these kinds of attacks can be done.”

