Meet Altos Labs, Silicon Valley’s latest wild bet on living forever

The new company, incorporated in the US and in the UK earlier this year, will establish several institutes in places including the Bay Area, San Diego, Cambridge, UK and Japan, and is recruiting a large cadre of university scientists with lavish salaries and the promise that they can pursue unfettered blue-sky research on how cells age and how to reverse that process.

Some people briefed by the company have been told that its investors include Jeff Bezos, the world’s richest person, who stepped down as CEO of Amazon in July and weeks later risked his life by jumping into a rocket capsule to reach outer space. Milner and his wife Julia confirmed through a spokesperson they are investors in Altos through a foundation.

Altos is certain to draw comparisons to Calico Labs, a longevity company announced in 2013 by Google co-founder Larry Page. Calico also hired elite scientific figures and gave them generous budgets, although it’s been questioned whether the Google spinout has made much progress. Calico has also started a lab whose focus is reprogramming; it published its first preprint on the topic this year.

Among the scientists said to be joining Altos are Juan Carlos Izpisúa Belmonte, a Spanish biologist at the Salk Institute, in La Jolla, California, who has gained notoriety for research mixing human and monkey embryos and who has predicted that human lifespans could be increased by 50 years. Salk declined to comment.

Also joining is Steve Horvath, a UCLA professor and developer of a “biological clock” that can accurately measure human aging. Shinya Yamanaka, who shared a 2012 Nobel Prize for the discovery of reprogramming, will be an unpaid senior scientist and will chair the company’s scientific advisory board.

Yamanaka’s breakthrough discovery was that with the addition of just four proteins, now known as Yamanaka factors, cells can be instructed to revert to a primitive state with the properties of embryonic stem cells. By 2016, Izpisúa Belmonte’s lab had applied these factors to entire living mice, achieving signs of age reversal and leading him to term reprogramming a potential “elixir of life.”

The results of such mouse experiments, while tantalizing, were also frightening. Depending on how much reprogramming occurred, some mice developed ugly embryonic tumors called teratomas, even as others showed signs their tissues had become younger.

“Although there are many hurdles to overcome, there is huge potential,” Yamanaka said in an email, in which he confirmed his role in Altos.

Mid-life crisis?

It’s been said that young people dream of being rich, and rich people dream of being young. That paradox is one that people like Milner, 59, and Bezos, 57, may feel acutely. Forbes currently ranks Bezos as the world’s richest person, with a net worth of around $200 billion. Milner’s wealth is estimated at $4.8 billion.

Bezos Expeditions, the investment office of Amazon’s founder, did not reply to an email seeking comment.

People familiar with the formation of Altos say that initially Milner’s interest in reprogramming was philanthropic. After the meeting at his home, a non-profit called the Milky Way Research Foundation sponsored by Milner awarded three-year grants, of $1 million a year, to several longevity researchers. The proposals were considered by an advisory board including Yamanaka and Jennifer Doudna, who shared a Breakthrough Prize in 2015 and later a Nobel in 2020 for her co-discovery of CRISPR genome editing.

These weird virtual creatures evolve their bodies to solve problems

“It’s already known that certain bodies accelerate learning,” says Bongard. “This work shows that AI can search for such bodies.” Bongard’s lab has developed robot bodies that are adapted to particular tasks, such as giving callus-like coatings to feet to reduce wear and tear. Gupta and his colleagues extend this idea, says Bongard. “They show that the right body can also speed up changes in the robot’s brain.”

Ultimately, this technique could reverse the way we think of building physical robots, says Gupta. Instead of starting with a fixed body configuration and then training the robot to do a particular task, you could use DERL to let the optimal body plan for that task evolve and then build that.
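The evolve-then-train idea can be sketched in miniature. Everything below is invented for illustration: the fitness function is a made-up stand-in for training a controller in simulation and scoring it on a task. Only the select-and-mutate loop reflects the general structure of such evolutionary searches.

```python
import random

random.seed(0)

def task_fitness(body):
    # Hypothetical stand-in for training a controller and measuring
    # task performance; here, fitness peaks when the body's four
    # limb-length parameters sum to a target of 10.
    return -abs(sum(body) - 10.0)

def mutate(body):
    # Perturb one randomly chosen body parameter.
    i = random.randrange(len(body))
    child = list(body)
    child[i] += random.gauss(0, 0.5)
    return child

def evolve(pop_size=20, generations=50):
    # Random initial population of candidate body plans.
    population = [[random.uniform(0, 5) for _ in range(4)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, refill with mutated copies of survivors.
        population.sort(key=task_fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in survivors]
    return max(population, key=task_fitness)

best = evolve()
```

Building the physical robot only after a loop like this has settled on a body plan is the reversal Gupta describes.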

Gupta’s unimals are part of a broad shift in how researchers are thinking about AI. Instead of training AIs on specific tasks, such as playing Go or analyzing a medical scan, researchers are starting to drop bots into virtual sandboxes—such as POET, OpenAI’s virtual hide-and-seek arena, and DeepMind’s virtual playground XLand—and getting them to learn how to solve multiple tasks in ever-changing, open-ended training dojos. Instead of mastering a single challenge, AIs trained in this way learn general skills.

For Gupta, free-form exploration will be key for the next generation of AIs. “We need truly open-ended environments to create intelligent agents,” he says.

Rediscover trust in cybersecurity

The world has changed dramatically in a short amount of time—changing the world of work along with it. The new hybrid remote and in-office work world has ramifications for tech—specifically cybersecurity—and signals that it’s time to acknowledge just how intertwined humans and technology truly are.

Enabling a fast-paced, cloud-powered collaboration culture is critical to rapidly growing companies, positioning them to out-innovate, outperform, and outsmart their competitors. Achieving this level of digital velocity, however, comes with a rapidly growing cybersecurity challenge that is often overlooked or deprioritized: insider risk, when a team member accidentally—or not—shares data or files outside of trusted parties. Ignoring the intrinsic link between employee productivity and insider risk can impact both an organization’s competitive position and its bottom line.

You can’t treat employees the same way you treat nation-state hackers

Insider risk includes any user-driven data exposure event—security, compliance or competitive in nature—that jeopardizes the financial, reputational or operational well-being of a company and its employees, customers, and partners. Thousands of user-driven data exposure and exfiltration events occur daily, stemming from accidental user error, employee negligence, or malicious users intending to do harm to the organization. Many users create insider risk accidentally, simply by making decisions based on time and reward, sharing and collaborating with the goal of increasing their productivity. Other users create risk due to negligence, and some have malicious intentions, like an employee stealing company data to bring to a competitor. 

From a cybersecurity perspective, organizations need to treat insider risk differently than external threats. With threats like hackers, malware, and nation-state threat actors, the intent is clear—it’s malicious. But the intent of employees creating insider risk is not always clear—even if the impact is the same. Employees can leak data by accident or due to negligence. Fully accepting this truth requires a mindset shift for security teams that have historically operated with a bunker mentality—under siege from the outside, holding their cards close to the vest so the enemy doesn’t gain insight into their defenses to use against them. Employees are not the adversaries of a security team or a company—in fact, they should be seen as allies in combating insider risk.

Transparency feeds trust: Building a foundation for training

All companies want to keep their crown jewels—source code, product designs, customer lists—from ending up in the wrong hands. Imagine the financial, reputational, and operational risk that could come from material data being leaked before an IPO, acquisition, or earnings call. Employees play a pivotal role in preventing data leaks, and there are two crucial elements to turning employees into insider risk allies: transparency and training. 

Transparency may feel at odds with cybersecurity. For cybersecurity teams that operate with an adversarial mindset appropriate for external threats, it can be challenging to approach internal threats differently. Transparency is all about building trust on both sides. Employees want to feel that their organization trusts them to use data wisely. Security teams should always start from a place of trust, assuming the majority of employees’ actions have positive intent. But, as the saying goes in cybersecurity, it’s important to “trust, but verify.” 

Monitoring is a critical part of managing insider risk, and organizations should be transparent about this. CCTV cameras are not hidden in public spaces. In fact, they are often accompanied by signs announcing surveillance in the area. Leadership should make it clear to employees that their data movements are being monitored—but that their privacy is still respected. There is a big difference between monitoring data movement and reading all employee emails.

Transparency builds trust—and with that foundation, an organization can focus on mitigating risk by changing user behavior through training. At the moment, security education and awareness programs are niche. Phishing training is likely the first thing that comes to mind due to the success it’s had moving the needle and getting employees to think before they click. Outside of phishing, there is not much training for users to understand what, exactly, they should and shouldn’t be doing.

For a start, many employees don’t even know where their organizations stand. What applications are they allowed to use? What are the rules of engagement for those apps if they want to use them to share files? What data can they use? Are they entitled to that data? Does the organization even care? Cybersecurity teams deal with a lot of noise made by employees doing things they shouldn’t. What if you could cut down that noise just by answering these questions?

Training employees should be both proactive and responsive. Proactively, in order to change employee behavior, organizations should provide both long- and short-form training modules to instruct and remind users of best behaviors. Additionally, organizations should respond with a micro-learning approach using bite-sized videos designed to address highly specific situations. The security team needs to take a page from marketing, focusing on repetitive messages delivered to the right people at the right time. 

Once business leaders understand that insider risk is not just a cybersecurity issue, but one that is intimately intertwined with an organization’s culture and has a significant impact on the business, they will be in a better position to out-innovate, outperform, and outsmart their competitors. In today’s hybrid remote and in-office work world, the human element that exists within technology has never been more significant. That’s why transparency and training are essential to keep data from leaking outside the organization.

This content was produced by Code42. It was not written by MIT Technology Review’s editorial staff.

How AI is reinventing what computers are

Fall 2021: the season of pumpkins, pecan pies, and peachy new phones. Every year, right on cue, Apple, Samsung, Google, and others drop their latest releases. These fixtures in the consumer tech calendar no longer inspire the surprise and wonder of those heady early days. But behind all the marketing glitz, there’s something remarkable going on. 

Google’s latest offering, the Pixel 6, is the first phone to have a separate chip dedicated to AI that sits alongside its standard processor. And the chip that runs the iPhone has for the last couple of years contained what Apple calls a “neural engine,” also dedicated to AI. Both chips are better suited to the types of computations involved in training and running machine-learning models on our devices, such as the AI that powers your camera. Almost without our noticing, AI has become part of our day-to-day lives. And it’s changing how we think about computing.

What does that mean? Well, computers haven’t changed much in 40 or 50 years. They’re smaller and faster, but they’re still boxes with processors that run instructions from humans. AI changes that on at least three fronts: how computers are made, how they’re programmed, and how they’re used. Ultimately, it will change what they are for. 

“The core of computing is changing from number-crunching to decision-making,” says Pradeep Dubey, director of the parallel computing lab at Intel. Or, as MIT CSAIL director Daniela Rus puts it, AI is freeing computers from their boxes.

More haste, less speed

The first change concerns how computers—and the chips that control them—are made. Traditional computing gains came as machines got faster at carrying out one calculation after another. For decades the world benefited from chip speed-ups that came with metronomic regularity as chipmakers kept up with Moore’s Law. 

But the deep-learning models that make current AI applications work require a different approach: they need vast numbers of less precise calculations to be carried out all at the same time. That means a new type of chip is required: one that can move data around as quickly as possible, making sure it’s available when and where it’s needed. When deep learning exploded onto the scene a decade or so ago, there were already specialty computer chips available that were pretty good at this: graphics processing units, or GPUs, which were designed to display an entire screenful of pixels dozens of times a second. 
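The shift in workload can be made concrete with a few lines of NumPy (an illustrative sketch, not chip code). A single batched matrix multiply performs thousands of independent multiply-adds, and casting the same computation down to 16-bit floats shows the less-precise arithmetic these chips trade on:

```python
import numpy as np

# Deep-learning workloads boil down to huge batched matrix multiplies:
# many simple multiply-adds that can all run at the same time.
rng = np.random.default_rng(0)
activations = rng.standard_normal((64, 512)).astype(np.float32)
weights = rng.standard_normal((512, 256)).astype(np.float32)

# One call produces 64*256 outputs, each a 512-term dot product:
# wide, regular arithmetic of exactly the kind GPUs and TPUs accelerate.
full_precision = activations @ weights

# The same computation in 16-bit floats: each number is less precise,
# but memory traffic is halved, the trade AI accelerators exploit.
low_precision = activations.astype(np.float16) @ weights.astype(np.float16)
```

On a CPU these two calls take similar paths; the point of AI-specific silicon is that the low-precision version maps onto far denser, cheaper hardware.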

Anything can become a computer. Indeed, most household objects, from toothbrushes to light switches to doorbells, already come in a smart version.

Now chipmakers like Intel, Arm, and Nvidia, which supplied many of the first GPUs, are pivoting to make hardware tailored specifically for AI. Google and Facebook are also forcing their way into this industry for the first time, in a race to find an AI edge through hardware.

For example, the chip inside the Pixel 6 is a new mobile version of Google’s tensor processing unit, or TPU. Unlike traditional chips, which are geared toward ultrafast, precise calculations, TPUs are designed for the high-volume but low-precision calculations required by neural networks. Google has used these chips in-house since 2015: they process people’s photos and natural-language search queries. Google’s sister company DeepMind uses them to train its AIs.

In the last couple of years, Google has made TPUs available to other companies, and these chips—as well as similar ones being developed by others—are becoming the default inside the world’s data centers. 

AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-learning algorithm—a type of AI that learns how to solve a task through trial and error—to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would think of—but they worked. This kind of AI could one day develop better, more efficient chips.
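Google’s chip-placement system isn’t public in runnable form, but the trial-and-error loop at the heart of reinforcement learning can be shown with a toy learner. The candidate “placements” and their hidden payoffs below are invented for the example; only the explore-measure-update pattern is the real technique:

```python
import random

random.seed(1)

# Three candidate "placements" with hidden quality scores the learner
# cannot see directly; it only observes noisy rewards from trials.
true_quality = [0.2, 0.5, 0.8]
estimates = [0.0, 0.0, 0.0]   # running average of observed reward
counts = [0, 0, 0]

for _ in range(2000):
    if random.random() < 0.1:
        choice = random.randrange(3)                       # explore
    else:
        choice = max(range(3), key=lambda i: estimates[i])  # exploit
    reward = true_quality[choice] + random.gauss(0, 0.1)
    counts[choice] += 1
    # Incrementally update the running average for the chosen option.
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

best = max(range(3), key=lambda i: estimates[i])
```

After enough trials the learner’s estimates converge on the best-paying option, despite never being told the true scores, which is the sense in which such systems find designs no human specified.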

Show, don’t tell

The second change concerns how computers are told what to do. For the past 40 years we have been programming computers; for the next 40 we will be training them, says Chris Bishop, head of Microsoft Research in the UK. 

Traditionally, to get a computer to do something like recognize speech or identify objects in an image, programmers first had to come up with rules for the computer.

With machine learning, programmers no longer write rules. Instead, they create a neural network that learns those rules for itself. It’s a fundamentally different way of thinking. 
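A toy contrast makes the shift concrete. Below, a hand-written rule sits next to a tiny learner that derives its own “rule” from labeled examples; the spam data and the word-counting scheme are invented for illustration, not any production method:

```python
# Rule-based: a programmer writes the decision logic by hand.
def is_spam_rules(text):
    return "free money" in text.lower()

# Learned: the decision logic is fit from labeled examples instead.
examples = [("claim your free money now", 1),
            ("free money inside", 1),
            ("meeting moved to 3pm", 0),
            ("quarterly report attached", 0)]

def learn_spam_words(data):
    # Count how often each word appears in spam vs. non-spam messages.
    spam_counts, ham_counts = {}, {}
    for text, label in data:
        for word in text.lower().split():
            bucket = spam_counts if label else ham_counts
            bucket[word] = bucket.get(word, 0) + 1
    # A word is "learned" as spammy if it shows up more in spam.
    return {w for w in spam_counts if spam_counts[w] > ham_counts.get(w, 0)}

spam_words = learn_spam_words(examples)

def is_spam_learned(text):
    return any(w in spam_words for w in text.lower().split())
```

The rule-based version only ever knows what its author wrote; the learned version changes its behavior when the training data changes, with no code edited at all.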

Copyright © 2020 Diliput News.