A paralyzed man is challenging Neuralink’s monkey to a match of mind Pong

Nathan practices Pong


Copeland already uses mental commands to play video games, including Sega classics like Sonic the Hedgehog. He admits it was a “tough” question whether or not to challenge Musk’s monkey. “I could get my ass beat,” he says. “But yeah, I would play.”

Copeland issued the challenge in an interview and on today’s episode of the national public radio program Science Friday, where he appeared to discuss brain interfaces.

Neuralink, a secretive company established by Musk in 2016, did not respond to our attempts to relay the Pong challenge.

Nathan Copeland using a neural implant to play Pong with his mind this week at the University of Pittsburgh.

COURTESY OF NATHAN COPELAND

Playing at home

Brain interfaces work by recording the electrical firing of neurons in the motor cortex, the part of the brain that controls movement. Each neuron’s firing rate contains information about movements a subject is making or merely imagining. A “decoder” program then translates those signals into commands that can drive a computer cursor.

Copeland is one of a handful of humans with an older style of implant, called a Utah array, which he uses in experiments at the University of Pittsburgh to do things including moving robotic arms. Before Copeland performs a task, he begins with a 10-minute training session so an algorithm can map firing signals from his neurons to specific movements. After such a session, Copeland says, he can think a computer cursor left or right, forward or back. Thinking of closing his hand causes a mouse click.
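To make the calibrate-then-decode idea concrete, here is a minimal sketch in Python. The synthetic data, the simple least-squares fit, and every variable name are illustrative assumptions, not the Pittsburgh lab’s actual decoder.

```python
import numpy as np

# Illustrative calibration data (hypothetical): each row is one time step.
# firing_rates: spike counts from the recorded neurons
# intended_velocity: the cursor velocity the subject was asked to imagine (x, y)
rng = np.random.default_rng(0)
n_steps, n_neurons = 600, 96          # roughly a 10-minute session sampled once a second
true_weights = rng.normal(size=(n_neurons, 2))
firing_rates = rng.poisson(lam=5.0, size=(n_steps, n_neurons)).astype(float)
intended_velocity = firing_rates @ true_weights + rng.normal(scale=0.5, size=(n_steps, 2))

# "Training session": fit a linear map from firing rates to cursor velocity
# (ordinary least squares stands in for whatever decoder the lab actually uses).
weights, *_ = np.linalg.lstsq(firing_rates, intended_velocity, rcond=None)

# Online use: turn a fresh window of neural activity into a cursor command.
def decode(window: np.ndarray) -> np.ndarray:
    """Map one time step of firing rates to an (x, y) cursor velocity."""
    return window @ weights

print(decode(firing_rates[0]))        # two numbers: decoded x and y velocity
```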

Beginning last March, the Pittsburgh team arranged for Copeland to use his brain implant on his own, at home, to operate a tablet computer. He’s used it to surf the web and draw pictures of a cat with a painting program. Last spring, he was using it six hours a day. “It got me through the pandemic,” he says.

This picture of a cat was drawn by Nathan Copeland, who is paralyzed but uses a brain-computer interface to control a computer. The image is for sale as a non-fungible token.

NATHAN COPELAND

The tablet is not particularly powerful, though. And he can use it only on battery power: he’s not supposed to plug his brain into any device directly connected to the electrical grid, since no one knows what effect a power surge could have. “I have encouraged him to be careful what software he puts on it,” says Jeffrey Weiss, a Pittsburgh researcher who works with Copeland. “I don’t have restrictions other than not to break the thing, and don’t get malware on it. It’s just a Windows machine.”

Copeland’s interface was installed by a neurosurgeon six years ago. He has four silicon implants in all. The two on his motor cortex allow him to control a robotic arm used in experiments or a computer cursor. Another two, in the somatosensory part of his brain, allow scientists to send signals into his mind, which he registers as sensations of pressure or tingling on his fingers.

The monkey’s advantage

If a mind match occurs, Neuralink’s primate would have the advantage of a next-generation interface, which the company calls “the Link.” While Copeland has to attach cables to two ports on his skull, Neuralink’s implant is about the size of a soda bottle cap and is embedded entirely in the skull. It transmits the brain recordings wirelessly, via Bluetooth.

“It’s a very promising device, but it’s new, and there are many questions about it,” Weiss says. “No one outside Neuralink has been able to get a look at it.” The company has said it hopes to recruit human subjects, but that will depend on how the implant holds up in animals, including the pigs Neuralink is testing it on. “No one knows if it’s going to last six months or six years,” says Weiss.


These weird virtual creatures evolve their bodies to solve problems



“It’s already known that certain bodies accelerate learning,” says Bongard. “This work shows that AI can search for such bodies.” Bongard’s lab has developed robot bodies that are adapted to particular tasks, such as giving callus-like coatings to feet to reduce wear and tear. Gupta and his colleagues extend this idea, says Bongard. “They show that the right body can also speed up changes in the robot’s brain.”

Ultimately, this technique could reverse the way we think of building physical robots, says Gupta. Instead of starting with a fixed body configuration and then training the robot to do a particular task, you could use DERL to let the optimal body plan for that task evolve and then build that.
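As a rough illustration of that outer loop, here is a deliberately tiny sketch in Python: a “body plan” is just a list of limb lengths, and the fitness function stands in for the reinforcement-learning training that DERL actually performs. Every function and number here is hypothetical, not Gupta’s code.

```python
import random

# Hypothetical stand-ins: a body plan is a list of limb lengths, and
# train_and_score pretends to train a controller with RL and report how
# well that body solved the task. Neither reflects the real DERL system.
def random_body(n_limbs=4):
    return [random.uniform(0.2, 1.0) for _ in range(n_limbs)]

def mutate(body):
    return [max(0.1, length + random.gauss(0, 0.05)) for length in body]

def train_and_score(body):
    # Placeholder fitness: in DERL this would be the return of an RL-trained
    # controller on the target task (e.g., walking over rough terrain).
    return -sum((length - 0.6) ** 2 for length in body)

population = [random_body() for _ in range(16)]
for generation in range(20):
    scored = sorted(population, key=train_and_score, reverse=True)
    survivors = scored[: len(scored) // 2]            # keep the better bodies
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(len(scored) - len(survivors))]

best = max(population, key=train_and_score)
print("evolved body plan:", [round(length, 2) for length in best])
```

The point of the sketch is only the shape of the process: evolve the body in an outer loop, train the controller in an inner loop, then build whatever body wins.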

Gupta’s unimals are part of a broad shift in how researchers are thinking about AI. Instead of training AIs on specific tasks, such as playing Go or analyzing a medical scan, researchers are starting to drop bots into virtual sandboxes—such as POET, OpenAI’s virtual hide-and-seek arena, and DeepMind’s virtual playground XLand—and getting them to learn how to solve multiple tasks in ever-changing, open-ended training dojos. Instead of mastering a single challenge, AIs trained in this way learn general skills.

For Gupta, free-form exploration will be key for the next generation of AIs. “We need truly open-ended environments to create intelligent agents,” he says.


Rediscover trust in cybersecurity



The world has changed dramatically in a short amount of time—changing the world of work along with it. The new hybrid remote and in-office work world has ramifications for tech—specifically cybersecurity—and signals that it’s time to acknowledge just how intertwined humans and technology truly are.

Enabling a fast-paced, cloud-powered collaboration culture is critical to rapidly growing companies, positioning them to out-innovate, outperform, and outsmart their competitors. Achieving this level of digital velocity, however, comes with a rapidly growing cybersecurity challenge that is often overlooked or deprioritized: insider risk, when a team member accidentally—or not—shares data or files outside of trusted parties. Ignoring the intrinsic link between employee productivity and insider risk can impact both an organization’s competitive position and its bottom line.

You can’t treat employees the same way you treat nation-state hackers

Insider risk includes any user-driven data exposure event—security, compliance or competitive in nature—that jeopardizes the financial, reputational or operational well-being of a company and its employees, customers, and partners. Thousands of user-driven data exposure and exfiltration events occur daily, stemming from accidental user error, employee negligence, or malicious users intending to do harm to the organization. Many users create insider risk accidentally, simply by making decisions based on time and reward, sharing and collaborating with the goal of increasing their productivity. Other users create risk due to negligence, and some have malicious intentions, like an employee stealing company data to bring to a competitor. 

From a cybersecurity perspective, organizations need to treat insider risk differently than external threats. With threats like hackers, malware, and nation-state threat actors, the intent is clear—it’s malicious. But the intent of employees creating insider risk is not always clear—even if the impact is the same. Employees can leak data by accident or due to negligence. Fully accepting this truth requires a mindset shift for security teams that have historically operated with a bunker mentality—under siege from the outside, holding their cards close to the vest so the enemy doesn’t gain insight into their defenses to use against them. Employees are not the adversaries of a security team or a company—in fact, they should be seen as allies in combating insider risk.

Transparency feeds trust: Building a foundation for training

All companies want to keep their crown jewels—source code, product designs, customer lists—from ending up in the wrong hands. Imagine the financial, reputational, and operational risk that could come from material data being leaked before an IPO, acquisition, or earnings call. Employees play a pivotal role in preventing data leaks, and there are two crucial elements to turning employees into insider risk allies: transparency and training. 

Transparency may feel at odds with cybersecurity. For cybersecurity teams that operate with an adversarial mindset appropriate for external threats, it can be challenging to approach internal threats differently. Transparency is all about building trust on both sides. Employees want to feel that their organization trusts them to use data wisely. Security teams should always start from a place of trust, assuming the majority of employees’ actions have positive intent. But, as the saying goes in cybersecurity, it’s important to “trust, but verify.” 

Monitoring is a critical part of managing insider risk, and organizations should be transparent about this. CCTV cameras are not hidden in public spaces. In fact, they are often accompanied by signs announcing surveillance in the area. Leadership should make it clear to employees that their data movements are being monitored—but that their privacy is still respected. There is a big difference between monitoring data movement and reading all employee emails.

Transparency builds trust—and with that foundation, an organization can focus on mitigating risk by changing user behavior through training. At the moment, security education and awareness programs are niche. Phishing training is likely the first thing that comes to mind due to the success it’s had moving the needle and getting employees to think before they click. Outside of phishing, there is not much training for users to understand what, exactly, they should and shouldn’t be doing.

For a start, many employees don’t even know where their organizations stand. What applications are they allowed to use? What are the rules of engagement for those apps if they want to use them to share files? What data can they use? Are they entitled to that data? Does the organization even care? Cybersecurity teams deal with a lot of noise made by employees doing things they shouldn’t. What if you could cut down that noise just by answering these questions?

Training employees should be both proactive and responsive. Proactively, in order to change employee behavior, organizations should provide both long- and short-form training modules to instruct and remind users of best behaviors. Additionally, organizations should respond with a micro-learning approach using bite-sized videos designed to address highly specific situations. The security team needs to take a page from marketing, focusing on repetitive messages delivered to the right people at the right time. 

Once business leaders understand that insider risk is not just a cybersecurity issue, but one that is intimately intertwined with an organization’s culture and has a significant impact on the business, they will be in a better position to out-innovate, outperform, and outsmart their competitors. In today’s hybrid remote and in-office work world, the human element that exists within technology has never been more significant. That’s why transparency and training are essential to keep data from leaking outside the organization.

This content was produced by Code42. It was not written by MIT Technology Review’s editorial staff.


How AI is reinventing what computers are



Fall 2021: the season of pumpkins, pecan pies, and peachy new phones. Every year, right on cue, Apple, Samsung, Google, and others drop their latest releases. These fixtures in the consumer tech calendar no longer inspire the surprise and wonder of those heady early days. But behind all the marketing glitz, there’s something remarkable going on. 

Google’s latest offering, the Pixel 6, is the first phone to have a separate chip dedicated to AI that sits alongside its standard processor. And the chip that runs the iPhone has for the last couple of years contained what Apple calls a “neural engine,” also dedicated to AI. Both chips are better suited to the types of computations involved in training and running machine-learning models on our devices, such as the AI that powers your camera. Almost without our noticing, AI has become part of our day-to-day lives. And it’s changing how we think about computing.

What does that mean? Well, computers haven’t changed much in 40 or 50 years. They’re smaller and faster, but they’re still boxes with processors that run instructions from humans. AI changes that on at least three fronts: how computers are made, how they’re programmed, and how they’re used. Ultimately, it will change what they are for. 

“The core of computing is changing from number-crunching to decision-making,” says Pradeep Dubey, director of the parallel computing lab at Intel. Or, as MIT CSAIL director Daniela Rus puts it, AI is freeing computers from their boxes.

More haste, less speed

The first change concerns how computers—and the chips that control them—are made. Traditional computing gains came as machines got faster at carrying out one calculation after another. For decades the world benefited from chip speed-ups that came with metronomic regularity as chipmakers kept up with Moore’s Law. 

But the deep-learning models that make current AI applications work require a different approach: they need vast numbers of less precise calculations to be carried out all at the same time. That means a new type of chip is required: one that can move data around as quickly as possible, making sure it’s available when and where it’s needed. When deep learning exploded onto the scene a decade or so ago, there were already specialty computer chips available that were pretty good at this: graphics processing units, or GPUs, which were designed to display an entire screenful of pixels dozens of times a second. 
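A toy NumPy example of that trade-off: the same matrix multiplication carried out in 16-bit rather than 64-bit floating point gives a slightly less precise answer, while the operands take a quarter of the memory, which is roughly the bargain GPUs and other AI accelerators strike. The matrix sizes here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
activations = rng.normal(size=(512, 1024))   # one "layer input" batch
weights = rng.normal(size=(1024, 1024))      # one layer's weight matrix

# Full-precision result (the classic, exact-as-possible calculation).
exact = activations @ weights

# Low-precision result: cast to float16, as AI accelerators commonly do.
approx = (activations.astype(np.float16) @ weights.astype(np.float16)).astype(np.float64)

# The float16 result is close but not identical, and its operands use a
# quarter of the memory, so hardware can stream many more of them in parallel.
rel_error = np.abs(exact - approx).max() / np.abs(exact).max()
print(f"operand bytes: {weights.astype(np.float16).nbytes} vs {weights.nbytes}")
print(f"worst-case relative error: {rel_error:.4f}")
```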

Anything can become a computer. Indeed, most household objects, from toothbrushes to light switches to doorbells, already come in a smart version.

Now chipmakers like Intel and Arm and Nvidia, which supplied many of the first GPUs, are pivoting to make hardware tailored specifically for AI. Google and Facebook are also forcing their way into this industry for the first time, in a race to find an AI edge through hardware. 

For example, the chip inside the Pixel 6 is a new mobile version of Google’s tensor processing unit, or TPU. Unlike traditional chips, which are geared toward ultrafast, precise calculations, TPUs are designed for the high-volume but low-precision calculations required by neural networks. Google has used these chips in-house since 2015: they process people’s photos and natural-language search queries. Google’s sister company DeepMind uses them to train its AIs.

In the last couple of years, Google has made TPUs available to other companies, and these chips—as well as similar ones being developed by others—are becoming the default inside the world’s data centers. 

AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-learning algorithm—a type of AI that learns how to solve a task through trial and error—to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would think of—but they worked. This kind of AI could one day develop better, more efficient chips.
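For a sense of what “trial and error” means in code, here is a minimal bandit-style loop: the agent samples candidate placements, keeps a running estimate of how good each one is, and gradually favors the best. It is a toy illustration of reinforcement learning, not Google’s floorplanning system; the five candidate placements and their qualities are made up.

```python
import random

# Hypothetical setup: five candidate placements for one block of the chip,
# each with an unknown "quality" the agent can only sample noisily.
true_quality = [0.2, 0.5, 0.1, 0.9, 0.4]
estimates = [0.0] * len(true_quality)
counts = [0] * len(true_quality)

for step in range(2000):
    # Trial: mostly exploit the best current estimate, sometimes explore.
    if random.random() < 0.1:
        choice = random.randrange(len(true_quality))
    else:
        choice = max(range(len(true_quality)), key=lambda i: estimates[i])

    # Error signal: a noisy reward for the placement that was tried.
    reward = true_quality[choice] + random.gauss(0, 0.1)
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print("learned preference:", estimates.index(max(estimates)))  # usually option 3
```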

Show, don’t tell

The second change concerns how computers are told what to do. For the past 40 years we have been programming computers; for the next 40 we will be training them, says Chris Bishop, head of Microsoft Research in the UK. 

Traditionally, to get a computer to do something like recognize speech or identify objects in an image, programmers first had to come up with rules for the computer.

With machine learning, programmers no longer write rules. Instead, they create a neural network that learns those rules for itself. It’s a fundamentally different way of thinking. 
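A small, made-up illustration of the contrast: the first function is a rule a programmer writes by hand, while the second lets a simple model infer an equivalent rule from labeled examples (scikit-learn is used here purely as a convenient stand-in for “training”; the features and data are invented).

```python
from sklearn.linear_model import LogisticRegression

# Old way: a programmer writes the rule explicitly.
def is_spam_rule(num_links: int, num_exclamations: int) -> bool:
    return num_links > 3 or num_exclamations > 5

# New way: the rule is learned from labeled examples.
# Each example is [number of links, number of exclamation marks].
examples = [[0, 0], [1, 1], [5, 0], [0, 7], [6, 6], [2, 2], [4, 1], [1, 6]]
labels   = [0,      0,      1,      1,      1,      0,      1,      1]

model = LogisticRegression().fit(examples, labels)
print(model.predict([[5, 0], [1, 1]]))   # the learned "rule" applied to new cases
```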
