The NYPD used Clearview’s controversial facial recognition tool. Here’s what you need to know

The emails span a period from October 2018 through February 2020, beginning when Clearview AI CEO Hoan Ton-That was introduced to NYPD deputy inspector Chris Flanagan. After initial meetings, Clearview AI entered into a vendor contract with NYPD in December 2018 on a trial basis that lasted until the following March. 

The documents show that many individuals at NYPD had access to Clearview during and after this time, from department leadership to junior officers. Throughout the exchanges, Clearview AI encouraged more use of its services. (“See if you can reach 100 searches,” its onboarding instructions urged officers.) The emails show that trial accounts for the NYPD were created as late as February 2020, almost a year after the trial period was said to have ended. 

We reviewed the emails and talked to top surveillance and legal experts about their contents. Here’s what you need to know. 

NYPD lied about the extent of its relationship with Clearview AI and the use of its facial recognition technology

The NYPD told BuzzFeed News and the New York Post previously that it had “no institutional relationship” with Clearview AI, “formally or informally.” The department did disclose that it had trialed Clearview AI, but the emails show that the technology was used over a sustained time period by a large number of people who completed a high volume of searches in real investigations.

In one exchange, a detective working in the department’s facial recognition unit said, “App is working great.” In another, an officer on the NYPD’s identity theft squad said that “we continue to receive positive results” and have “gone on to make arrests.” (We have removed full names and email addresses from these exchanges; other personal details were redacted in the original documents.)

Albert Fox Cahn, executive director at the Surveillance Technology Oversight Project, a nonprofit that advocates for the abolition of police use of facial recognition technology in New York City, says the records clearly contradict NYPD’s previous public statements on its use of Clearview AI. 

“Here we have a pattern of officers getting Clearview accounts—not for weeks or months, but over the course of years,” he says. “We have evidence of meetings with officials at the highest level of the NYPD, including the facial identification section. This isn’t a few officers who decide to go off and get a trial account. This was a systematic adoption of Clearview’s facial recognition technology to target New Yorkers.”

Further, NYPD’s description of its facial recognition use, which is required under a recently passed law, says that “investigators compare probe images obtained during investigations with a controlled and limited group of photographs already within possession of the NYPD.” Clearview AI is known for its database of over 3 billion photos scraped from the web. 

NYPD is working closely with immigration enforcement, and officers referred Clearview AI to ICE

The documents contain multiple emails from the NYPD that appear to be referrals to aid Clearview in selling its technology to the Department of Homeland Security. Two police officers had both NYPD and Homeland Security affiliations in their email signatures, while another officer identified as a member of a Homeland Security task force.

New York is designated as a sanctuary city, meaning that local law enforcement limits its cooperation with federal immigration agencies. In fact, NYPD’s facial recognition policy statement says that “information is not shared in furtherance of immigration enforcement” and “access will not be given to other agencies for purposes of furthering immigration enforcement.” 

“I think one of the big takeaways is just how lawless and unregulated the interactions and surveillance and data sharing landscape is between local police, federal law enforcement, immigration enforcement,” says Matthew Guariglia, an analyst at the Electronic Frontier Foundation. “There just seems to be so much communication, maybe data sharing, and so much unregulated use of technology.” 

Cahn says the emails immediately ring alarm bells, particularly since a great deal of law enforcement information funnels through central systems known as fusion centers.

“You can claim you’re a sanctuary city all you want, but as long as you continue to have these DHS task forces, as long as you continue to have information fusion centers that allow real-time data exchange with DHS, you’re making that promise into a lie.” 

Many officers asked to use Clearview AI on their personal devices or through their personal email accounts 

At least four officers asked for access to Clearview’s app on their personal devices or through personal emails. Department devices are closely regulated, and it can be difficult to download applications to official NYPD mobile phones. Some officers clearly opted to use their personal devices when department phones were too restrictive. 

Clearview replied to one such email: “Hi William, you should have a setup email in your inbox shortly.” 

Jonathan McCoy is a digital forensics attorney at the Legal Aid Society and took part in filing the freedom of information request. He found the use of personal devices particularly troublesome: “My takeaway is that they were actively trying to circumvent NYPD policies and procedures that state that if you’re going to be using facial recognition technology, you have to go through FIS (facial identification section) and they have to use the technology that’s already been approved by the NYPD wholesale.” NYPD does already have a facial recognition system, provided by a company called Dataworks. 

Meet Jennifer Daniel, the woman who decides what emoji we get to use



Emoji are now part of our language. If you’re like most people, you pepper your texts, Instagram posts, and TikTok videos with various little images to augment your words—maybe the syringe with a bit of blood dripping from it when you got your vaccination, the prayer (or high-fiving?) hands as a shortcut to “thank you,” a rosy-cheeked smiley face with jazz hands for a covid-safe hug from afar. Today’s emoji catalogue includes nearly 3,000 illustrations representing everything from emotions to food, natural phenomena, flags, and people at various stages of life.

Behind all those symbols is the Unicode Consortium, a nonprofit group of hardware and software companies aiming to make text and emoji readable and accessible to everyone. Part of their goal is to make languages look the same on all devices; a Japanese character should be typographically consistent across all media, for example. But Unicode is probably best known for being the gatekeeper of emoji: releasing them, standardizing them, and approving or rejecting new ones.

Jennifer Daniel is the first woman at the helm of the Emoji Subcommittee for the Unicode Consortium and a fierce advocate for inclusive, thoughtful emoji. She initially rose to prominence for introducing Mx. Claus, a gender-inclusive alternative to Santa and Mrs. Claus; a non-gendered person breastfeeding a non-gendered baby; and a masculine face wearing a bridal veil. 

Now she’s on a mission to bring emoji to a post-pandemic future in which they are as broadly representative as possible. That means taking on an increasingly public role, whether through her popular and delightfully nerdy Substack newsletter, What Would Jennifer Do? (in which she analyzes the design process for upcoming emoji), or by inviting the general public to submit concerns about emoji and speak up if they aren’t representative or accurate.

“There isn’t a precedent here,” Daniel says of her job. And to Daniel, that’s exciting not just for her but for the future of human communication.

I spoke to her about how she sees her role and the future of emoji. The interview has been lightly edited and condensed. 

What does it mean to chair the subcommittee on emoji? What do you do?

It’s not sexy. [laughs] A lot of it is managing volunteers [the committee is composed of volunteers who review applications and help in approval and design]. There’s a lot of paperwork. A lot of meetings. We meet twice a week.

I read a lot and talk to a lot of people. I recently talked to a gesture linguist to learn how people use their hands in different cultures. How do we make better hand-gesture emoji? If the image is no good or isn’t clear, it’s a dealbreaker. I’m constantly doing lots of research and consulting with different experts. I’ll be on the phone with a botanical garden about flowers, or a whale expert to get the whale emoji right, or a cardiovascular surgeon so we have the anatomy of the heart down. 

There’s an old essay by Beatrice Warde about typography. She asked if a good typeface is a bedazzled crystal goblet or a transparent one. Some would say the ornate one because it’s so fancy, and others would say the crystal goblet because you can see and appreciate the wine. With emoji, I lend myself more to the “transparent crystal goblet” philosophy. 

Why should we care about how our emoji are designed?

My understanding is that 80% of communication is nonverbal. There’s a parallel in how we communicate. We text how we talk. It’s informal, it’s loose. You’re pausing to take a breath. Emoji are shared alongside words.

When emoji first came around, we had the misconception that they were ruining language. Learning a new language is really hard, and emoji is kind of like a new language. It works with how you already communicate. It evolves as you evolve. How you communicate and present yourself evolves, just like yourself. You can look at the nearly 3,000 emoji and it [their interpretation] changes by age or gender or geographic area. When we talk to someone and are making eye contact, you shift your body language, and that’s an emotional contagion. It builds empathy and connection. It gives you permission to reveal that about yourself. Emoji can do that, all in an image.

Product design gets an AI makeover



It’s a tall order, but one that Zapf says artificial intelligence (AI) technology can support by capturing the right data and guiding engineers through product design and development.

No wonder a November 2020 McKinsey survey reveals that more than half of organizations have adopted AI in at least one function, and 22% of respondents report at least 5% of their companywide earnings are attributable to AI. And in manufacturing, 71% of respondents have seen a 5% or more increase in revenue with AI adoption.

But that wasn’t always the case. Once “rarely used in product development,” AI has experienced an evolution over the past few years, Zapf says. Today, tech giants known for their innovations in AI, such as Google, IBM, and Amazon, “have set new standards for the use of AI in other processes,” such as engineering.

“AI is a promising and exploratory area that can significantly improve user experience for designing engineers, as well as gather relevant data in the development process for specific applications,” says Katrien Wyckaert, director of industry solutions for Siemens Industry Software.

The result is a growing appreciation for a technology that promises to simplify complex systems, get products to market faster, and drive product innovation.

Simplifying complex systems

A perfect example of AI’s power to overhaul product development is Renault. In response to increasing consumer demand, the French automaker is equipping a growing number of new vehicle models with an automated manual transmission (AMT)—a system that behaves like an automatic transmission but allows drivers to shift gears electronically using a push-button command.

AMTs are popular among consumers, but designing them can present formidable challenges. That’s because an AMT’s performance depends on the operation of three distinct subsystems: an electro-mechanical actuator that shifts the gears, electronic sensors that monitor vehicle status, and software embedded in the transmission control unit, which controls the engine. Because of this complexity, it can take up to a year of extensive trial and error to define the system’s functional requirements, design the actuator mechanics, develop the necessary software, and validate the overall system.

In an effort to streamline its AMT development process, Renault turned to Simcenter Amesim software from Siemens Digital Industries Software. The simulation technology relies on artificial neural networks, AI “learning” systems loosely modeled on the human brain. Engineers simply drag, drop, and connect icons to graphically create a model. When displayed on a screen as a sketch, the model illustrates the relationship between all the various elements of an AMT system. In turn, engineers can predict the behavior and performance of the AMT and make any necessary refinements early in the development cycle, avoiding late-stage problems and delays. In fact, by using a virtual engine and transmissions as stand-ins while developing hardware, Renault has managed to cut its AMT development time almost in half.
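The article doesn’t describe how such a neural-network model is built under the hood. As a rough illustration of the surrogate-modeling idea it gestures at, here is a minimal, hypothetical PyTorch sketch: a small network learns to predict a performance metric (an invented “shift time”) from design parameters, standing in for repeated physical simulation. The data, parameter names, and network architecture are invented for this sketch and have nothing to do with Simcenter Amesim’s actual models or interface.

```python
# Hypothetical surrogate-model sketch (PyTorch). All names and data are
# illustrative assumptions, not Siemens' or Renault's actual tooling.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Pretend each row is one candidate AMT design: [actuator force, spring
# stiffness, clutch inertia]; the target is a simulated gear-shift time (ms).
designs = torch.rand(500, 3)
shift_time = 80 + 40 * designs[:, 0] - 25 * designs[:, 1] + 10 * designs[:, 2] ** 2
shift_time = shift_time.unsqueeze(1) + torch.randn(500, 1)  # measurement noise

# A small neural network that learns the design-to-performance relationship.
surrogate = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Fit the surrogate to the simulated data.
for epoch in range(500):
    optimizer.zero_grad()
    loss = loss_fn(surrogate(designs), shift_time)
    loss.backward()
    optimizer.step()

# Once trained, the surrogate predicts performance for new designs instantly,
# so candidates can be screened without rerunning a full simulation.
candidate = torch.tensor([[0.7, 0.3, 0.5]])
print("predicted shift time (ms):", surrogate(candidate).item())
```

Once fitted, a model like this answers “what if” questions in milliseconds rather than hours, which is the kind of early-cycle prediction that lets engineers make refinements before hardware exists.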

Speed without sacrificing quality

So, too, are emerging environmental standards prompting Renault to rely more heavily on AI. To comply with new carbon dioxide emissions standards, Renault has been working on the design and development of hybrid vehicles. But hybrid engines are far more complex to develop than those found in vehicles with a single energy source, such as a conventional car. That’s because hybrid engines require engineers to perform complex feats like balancing the power required from multiple energy sources, choosing from a multitude of architectures, and examining the impact of transmissions and cooling systems on a vehicle’s energy performance.

“To meet new environmental standards for a hybrid engine, we must completely rethink the architecture of gasoline engines,” says Vincent Talon, head of simulation at Renault. The problem, he adds, is that carefully examining “the dozens of different actuators that can influence the final results of fuel consumption and pollutant emissions” is a lengthy and complex process, made more difficult by rigid timelines.

“Today, we clearly don’t have the time to painstakingly evaluate various hybrid powertrain architectures,” says Talon. “Rather, we needed to use an advanced methodology to manage this new complexity.”

For more on AI in industrial applications, visit www.siemens.com/artificialintelligence.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

AI consumes a lot of energy. Hackers could make it consume more.



The attack: In this kind of input-adaptive neural network, changing the input, such as the image it’s fed, changes how much computation the network needs to process it. That opens up a vulnerability that hackers could exploit, as researchers from the Maryland Cybersecurity Center outline in a new paper being presented at the International Conference on Learning Representations this week. By adding small amounts of noise to a network’s inputs, they made it perceive the inputs as more difficult and jack up its computation. 

When they assumed the attacker had full information about the neural network, they were able to max out its energy draw. When they assumed the attacker had limited to no information, they were still able to slow down the network’s processing and increase energy usage by 20% to 80%. The reason, as the researchers found, is that the attacks transfer well across different types of neural networks. Designing an attack for one image classification system is enough to disrupt many, says Yiğitcan Kaya, a PhD student and paper coauthor.
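The paper’s actual method isn’t reproduced here, but the core idea can be sketched: in an early-exit network, an attacker perturbs the input so that no intermediate exit ever becomes confident, forcing every layer to run. The toy PyTorch example below is only an illustration under assumed names and parameters; EarlyExitNet, energy_attack, the confidence threshold, and the gradient loop are invented for this sketch and are not the researchers’ code.

```python
# Minimal, hypothetical sketch of an energy/latency attack on an
# input-adaptive (early-exit) network. Everything here is illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EarlyExitNet(nn.Module):
    """Toy input-adaptive classifier: it stops computing as soon as an
    intermediate exit head is confident enough about its prediction."""

    def __init__(self, dim=32, n_classes=10, n_blocks=6, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_blocks)])
        self.exits = nn.ModuleList([nn.Linear(dim, n_classes) for _ in range(n_blocks)])
        self.threshold = threshold

    def forward(self, x):
        blocks_used = 0
        logits = None
        for block, exit_head in zip(self.blocks, self.exits):
            x = F.relu(block(x))
            logits = exit_head(x)
            blocks_used += 1
            # Early exit: easy inputs stop here, saving computation and energy.
            if F.softmax(logits, dim=-1).max() >= self.threshold:
                break
        return logits, blocks_used


def energy_attack(model, x, steps=50, lr=0.01, eps=0.1):
    """Craft a small perturbation that keeps every exit unconfident,
    pushing the input through all blocks (more computation, more energy)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        h = x + delta
        confidence = 0.0
        for block, exit_head in zip(model.blocks, model.exits):
            h = F.relu(block(h))
            # Sum of max softmax scores across exits; minimizing this makes
            # the input look "hard" at every stage.
            confidence = confidence + F.softmax(exit_head(h), dim=-1).max()
        confidence.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()   # step to lower exit confidence
            delta.clamp_(-eps, eps)           # keep the added noise small
            delta.grad.zero_()
    return (x + delta).detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = EarlyExitNet().eval()
    x = torch.randn(1, 32)
    _, clean_blocks = model(x)
    _, attacked_blocks = model(energy_attack(model, x))
    print(f"blocks used: clean={clean_blocks}, attacked={attacked_blocks}")
```

On a trained input-adaptive model, the perturbed input would typically travel through more blocks than the clean one, which is exactly the extra computation and energy draw the researchers describe.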

The caveat: This kind of attack is still somewhat theoretical. Input-adaptive architectures aren’t yet commonly used in real-world applications. But the researchers believe this will quickly change from the pressures within the industry to deploy lighter weight neural networks, such as for smart home and other IoT devices. Tudor Dumitraş, the professor who advised the research, says more work is needed to understand the extent to which this kind of threat could create damage. But, he adds, this paper is a first step to raising awareness: “What’s important to me is to bring to people’s attention the fact that this is a new threat model, and these kinds of attacks can be done.”
