
A new age of data means embracing the edge



Artificial intelligence holds enormous promise, but to be effective, it must learn from massive sets of data—and the more diverse the better. By learning patterns, AI tools can uncover insights and help decision-making not just in technology, but also in pharmaceuticals, medicine, manufacturing, and more. However, data can’t always be shared—whether because it’s personally identifiable, contains proprietary information, or sharing it would pose a security concern—until now.

“It’s going to be a new age,” says Dr. Eng Lim Goh, senior vice president and CTO of artificial intelligence at Hewlett Packard Enterprise. “The world will shift from one where you have centralized data, what we’ve been used to for decades, to one where you have to be comfortable with data being everywhere.”

Data everywhere means the edge, where each device, server, and cloud instance collects massive amounts of data. One estimate has the number of connected devices at the edge reaching 50 billion by 2022. The conundrum: how to keep collected data secure while still sharing the learnings from that data, which, in turn, help teach AI to be smarter. Enter swarm learning.

Swarm learning takes its cue from swarm intelligence, the way swarms of bees or birds move in response to their environment. When applied to data, Goh explains, it means “more peer-to-peer communications, more peer-to-peer collaboration, more peer-to-peer learning.” He continues, “That’s the reason why swarm learning will become more and more important as the center of gravity shifts” from centralized to decentralized data.

Consider this example, says Goh. “A hospital trains their machine learning models on chest X-rays and sees a lot of tuberculosis cases, but very few lung collapse cases. So therefore, this neural network model, when trained, will be very sensitive to detecting tuberculosis and less sensitive towards detecting lung collapse.” Goh continues, “However, we get the converse of it in another hospital. So what you really want is to have these two hospitals combine their data so that the resulting neural network model can predict both situations better. But since you can’t share that data, swarm learning comes in to help reduce that bias of both the hospitals.”

And this means, “each hospital is able to predict outcomes, with accuracy and with reduced bias, as though you have collected all the patient data globally in one place and learned from it,” says Goh.

And it’s not just hospital and patient data that must be kept secure. Goh emphasizes, “What swarm learning does is to try to avoid that sharing of data, or totally prevent the sharing of data, to [a model] where you only share the insights, you share the learnings. And that’s why it is fundamentally more secure.”


Full transcript:

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma. And this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is decentralized data. Whether it’s from devices, sensors, cars, the edge, if you will, the amount of data collected is growing. It can be personal and it must be protected. But is there a way to share insights and algorithms securely to help other companies and organizations and even vaccine researchers?

Two words for you: swarm learning.

My guest is Dr. Eng Lim Goh, who’s the senior vice president and CTO of artificial intelligence at Hewlett Packard Enterprise. Prior to this role, he was CTO for a majority of his 27 years at Silicon Graphics, now an HPE company. Dr. Goh was awarded NASA’s Exceptional Technology Achievement Medal for his work on AI in the International Space Station. He has also worked on numerous artificial intelligence research projects from F1 racing, to poker bots, to brain simulations. Dr. Goh holds a number of patents and had a publication land on the cover of Nature. This episode of Business Lab is produced in association with Hewlett Packard Enterprise. Welcome Dr. Goh.

Dr. Eng Lim Goh: Thank you for having me.

Laurel: So, we’ve started a new decade with a global pandemic. The urgency of finding a vaccine has allowed for greater information sharing between researchers, governments and companies. For example, the World Health Organization made the Pfizer vaccine’s mRNA sequence public to help researchers. How are you thinking about opportunities like this coming out of the pandemic?

Eng Lim: In science and medicine and others, sharing of findings is an important part of advancing science. So the traditional way is publications. The thing is, in a year, year and a half, of covid-19, there has been a surge of publications related to covid-19. One aggregator had, for example, on the order of 300,000 such documents related to covid-19 out there. It gets difficult, because of the amount of data, to be able to get what you need.

So a number of companies, organizations, started to build these natural language processing tools, AI tools, to allow you to ask very specific questions, not just search for keywords, but very specific questions so that you can get the answer that you need from this corpus of documents out there. A scientist could ask, or a researcher could ask, what is the binding energy of the SARS-CoV-2 spike protein to our ACE-2 receptor? And they can be even more specific and say, I want it in units of kcal per mol. And the NLP system would go through this corpus of documents and come up with an answer specific to that question, and even point to the area of the documents where the answer could be. So this is one area. To help with sharing, you could build AI tools to help go through this enormous amount of data that has been generated.
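To make that workflow concrete, here is a minimal sketch of extractive question answering over a tiny document collection. The Hugging Face transformers library, the default model it downloads, and the toy documents (with made-up numbers) are illustrative assumptions; Goh does not name the tools these organizations built.

```python
# Illustrative sketch only: extractive question answering over a toy corpus.
# The transformers library and these made-up documents are assumptions, not
# the systems described in the interview.
from transformers import pipeline

corpus = [
    "Study A reports a binding energy of -10.4 kcal/mol between the "
    "SARS-CoV-2 spike protein receptor-binding domain and the ACE2 receptor.",
    "Study B examines antibody responses in vaccinated volunteers.",
]

qa = pipeline("question-answering")  # downloads a default extractive QA model

question = ("What is the binding energy of the SARS-CoV-2 spike protein "
            "to the ACE2 receptor in kcal per mol?")

# Ask the question against each document, keep the highest-scoring answer,
# and note where in the document it was found.
best = None
for doc_id, doc in enumerate(corpus):
    result = qa(question=question, context=doc)  # dict: answer, score, start, end
    if best is None or result["score"] > best["score"]:
        best = {**result, "doc_id": doc_id}

print(f"Answer: {best['answer']!r} "
      f"(document {best['doc_id']}, characters {best['start']}-{best['end']})")
```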

The other area of sharing is sharing of clinical trial data, as you have mentioned. Early last year, before any of the SARS-CoV-2 vaccine clinical trials had started, we were given the yellow fever vaccine clinical trial data. And even more specifically, the gene expression data from the volunteers of the clinical trial. And one of the goals is, can you analyze the tens of thousands of these genes being expressed by the volunteers and help predict, for each volunteer, whether he or she would get side effects from this vaccine, and whether he or she would have a good antibody response to this vaccine? So we are building predictive tools by sharing this clinical trial data, albeit anonymized and in a restricted way.

Laurel: When we talk about natural language processing, I think the two takeaways that we’ve taken from that very specific example are, you can build better AI tools to help the researchers. And then also, it helps build predictive tools and models.

Eng Lim: Yes, absolutely.

Laurel: So, as a specific example of what you’ve been working on for the past year, Nature Magazine recently published an article about how a collaborative approach to data insights can help these stakeholders, especially during a pandemic. What did you find out during that work?

Eng Lim: Yes. This is related, again, to the sharing point you brought up, how to share learning so that the community can advance faster. The Nature publication you mentioned, the title of it is “Swarm Learning [for Decentralized and Confidential Clinical Machine Learning]”. Let’s use the hospital example. There is this hospital, and it sees its patients, the hospital’s patients, of a certain demographic. And it wants to build a machine learning model that predicts certain outcomes based on patient data, say for example a patient’s CT scan data. The issue with learning in isolation like this is, you start to evolve models through this learning of your patient data that are biased towards the demographics you are seeing. Or, in other ways, biased towards the type of medical devices you have.

The solution to this is to collect data from different hospitals, maybe from different regions or even different countries. And then combine all these hospitals’ data and then train the machine learning model on the combined data. The issue with this is that privacy of patient data prevents you from sharing that data. Swarm learning comes in to try and solve this, in two ways. One, instead of collecting data from these different hospitals, we allow each hospital to train their machine learning model on their own private patient data. And then occasionally, a blockchain comes in. That’s the second way. A blockchain comes in and collects all the learnings. I emphasize: the learnings, and not the patient data. It collects only the learnings, combines them with the learnings from other hospitals in other regions and other countries, averages them, and then sends back down to all the hospitals the updated, globally combined, averaged learnings.

And by learnings I mean the parameters, for example the neural network weights, of the machine learning model. So in this case, no patient data ever leaves an individual hospital. What leaves the hospital is only the learnings, the parameters or the neural network weights. So you send up your locally learned parameters, and what you get back from the blockchain is the globally averaged parameters. Then you update your model with the global average, and then you carry on learning locally again. After a few cycles of this sharing of learnings, we’ve tested it, each hospital is able to predict, with accuracy and with reduced bias, as though you have collected all the patient data globally in one place, and learned from it.
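The cycle described here (train locally, send up only the weights, receive a global average, keep training) can be sketched in a few lines of Python. This is a minimal illustration of the parameter-averaging loop only, using made-up data for three hypothetical hospitals; the blockchain coordination layer, and anything specific to HPE's implementation, is omitted.

```python
# Minimal sketch of the parameter-averaging cycle described above.
# Three hypothetical "hospitals" train a toy linear model locally; only the
# weight vectors ever leave a site. The blockchain layer is omitted.
import numpy as np

rng = np.random.default_rng(0)

def local_training_step(weights, local_data, lr=0.1):
    """One gradient step on private data (toy linear model, squared loss)."""
    X, y = local_data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Each hospital's private data stays here and is never shared.
hospitals = [(rng.normal(size=(100, 5)), rng.normal(size=100)) for _ in range(3)]
local_weights = [np.zeros(5) for _ in hospitals]

for cycle in range(20):
    # 1. Local learning on each hospital's own private data.
    local_weights = [local_training_step(w, d)
                     for w, d in zip(local_weights, hospitals)]
    # 2. Only the learnings (the weight vectors) are collected and averaged...
    global_weights = np.mean(local_weights, axis=0)
    # 3. ...and the global average is sent back down to every participant.
    local_weights = [global_weights.copy() for _ in hospitals]

print("Globally averaged weights after 20 cycles:", global_weights)
```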

Laurel: And the reason that blockchain is used is because it is actually a secure connection between various, in this case, machines, correct?

Eng Lim: There are two reasons, yes, why we use blockchain. The first reason is the security of it. And number two, we can keep that information private because, in a private blockchain, only participants, main participants or certified participants, are allowed in this blockchain. Now, even if the blockchain is compromised, what is only seen are the weights or the parameters of the learnings, not the private patient data, because the private patient data is not in the blockchain.

And the second reason for using a blockchain is, as opposed to having a central custodian that does the collection of the parameters, of the learnings. Because once you appoint a custodian, an entity, that collects all these learnings, if one of the hospitals becomes that custodian, then you have a situation where that appointed custodian has more information than the rest, or has more capability than the rest. Not so much more information, but more capability than the rest. So in order to have a more equitable sharing, we use a blockchain. And what the blockchain system does is randomly appoint one of the participants as the collector, as the leader, to collect the parameters, average them, and send them back down. And in the next cycle, randomly, another participant is appointed.

Laurel: So, there’s two interesting points here. One is, this project succeeds because you are not using only your own data. You are allowed to opt into this relationship to use the learnings from other researchers’ data as well. So that reduces bias. So that’s one kind of large problem solved. But then also this other interesting issue of equity and how even algorithms can perhaps be less equitable from time to time. But when you have an intentionally random algorithm in the blockchain assigning leadership for the collection of the learnings from each entity, that helps strip out any kind of possible bias as well, correct?

Eng Lim: Yes, yes, yes. Brilliant summary, Laurel. So there’s the first bias, which is, if you are learning in isolation, the hospital is learning, a neural network model, or a machine learning model more generally, of a hospital is learning in isolation only on their own private patient data, they will be naturally biased towards the demographics they are seeing. For example, we have an example where a hospital trains their machine learning models on chest x-rays and sees a lot of tuberculosis cases, but very few lung collapse cases. So therefore, this neural network model, when trained, will be very sensitive to detecting tuberculosis and less sensitive towards detecting lung collapse, for example. However, we get the converse of it in another hospital. So what you really want is to have these two hospitals combine their data so that the resulting neural network model can predict both situations better. But since you can’t share that data, swarm learning comes in to help reduce that bias of both the hospitals.

Laurel: All right. So we have an enormous amount of data. And it keeps growing exponentially as the edge, which is really any data generating device, system or sensor, expands. So how is decentralized data changing the way companies need to think about data?

Eng Lim: Oh, that’s a profound question. There is one estimate that says that by next year, by the year 2022, there will be 50 billion connected devices at the edge. And this is growing fast. We’re coming to a point where we have an average of about 10 connected devices potentially collecting data per person in this world. Given that situation, the center of gravity will shift from the data center being the main location generating data to one where the center of gravity, in terms of where data is generated, will be at the edge. And this will change dynamics tremendously for enterprises. With this enormous amount of data generated at the edge, and so many of these devices out there, you’ll reach a point where you cannot afford to backhaul, or bring back, all that data to the cloud or data center anymore.

Even with 5G, 6G and so on, the growth of data will outstrip, will far exceed, the growth in bandwidth of these new telecommunication capabilities. As such, you’ll reach a point where you have no choice but to push the intelligence to the edge so that you can decide what data to move back to the cloud or data center. So it’s going to be a new age. The world will shift from one where you have centralized data, what we’ve been used to for decades, to one where you have to be comfortable with data being everywhere. And when that’s the case, you need to do more peer-to-peer communications, more peer-to-peer collaboration, more peer-to-peer learning.

And that’s the reason why swarm learning will become more and more important as this progresses, as the center of gravity shifts out there from one where data is centralized, to one where data is everywhere.

Laurel: Could you talk a little bit more about how swarm intelligence is secure by design? In other words, it allows companies to share insights from data learnings with outside enterprises, or even within groups in a company, but then they don’t actually share the actual data?

Eng Lim: Yes. Fundamentally, when we want to learn from each other, one way is, we share the data so that each of us can learn from each other. What swarm learning does is to try to avoid that sharing of data, or totally prevent the sharing of data, to [a model] where you only share the insights, you share the learnings. And that’s why it is fundamentally more secure, using this approach, where data stays private in the location and never leaves that private entity. What leaves that private entity are only the learnings. And in this case, the neural network weights or the parameters of those learnings.

Now, there are people researching the ability to deduce the data from the learnings. It is still in the research phase, but we are prepared if it ever works. And that is, in the blockchain, we do homomorphic encryption of the weights, of the parameters, of the learnings. By homomorphic, we mean that when the appointed leader collects all these weights and then averages them, it can average them in the encrypted form, so that if someone intercepts the blockchain, they see encrypted learnings. They don’t see the learnings themselves. But we’ve not implemented that yet, because we don’t see it as necessary yet, until such time as reverse engineering the data from the learnings becomes feasible.
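As a concrete illustration of what averaging "in the encrypted form" could look like, here is a sketch using the open-source python-paillier (phe) package, whose encryption is additively homomorphic: the collector can sum and scale ciphertexts without decrypting them. The library choice, the single shared keypair, and the toy weights are assumptions made for illustration; Goh does not specify HPE's scheme, and notes it has not been implemented.

```python
# Illustrative sketch of averaging model weights in encrypted form using the
# python-paillier (phe) package. The library choice, the single shared keypair,
# and the toy weights are assumptions; this is not HPE's (unimplemented) scheme.
from functools import reduce
from operator import add

from phe import paillier

# In this toy setup all participants share one keypair; a real deployment
# would handle key management very differently.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each participant encrypts its locally learned weights before sharing them.
hospital_weights = [
    [0.12, -0.40, 0.33],
    [0.08, -0.35, 0.41],
    [0.15, -0.38, 0.29],
]
encrypted = [[public_key.encrypt(w) for w in ws] for ws in hospital_weights]

# The appointed leader averages the encrypted weights without decrypting them:
# Paillier ciphertexts support addition and multiplication by a plain scalar.
n = len(encrypted)
encrypted_avg = [reduce(add, column) * (1.0 / n) for column in zip(*encrypted)]

# Participants holding the private key decrypt only the averaged result.
print([round(private_key.decrypt(c), 4) for c in encrypted_avg])
# roughly [0.1167, -0.3767, 0.3433]
```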

Laurel: And so, when we think about increasing rules and legislation surrounding data, like GDPR and California’s CCPA, there needs to be some sort of solution to privacy concerns. Do you see swarm learning as one of those possible options as companies grow the amount of data they have?

Eng Lim: Yes, as an option. First, if there is a need for edge devices to learn from each other, swarm learning is there, is useful for it. And number two, as you are learning, you do not want the data from each entity or participant in swarm learning to leave that entity. It should only stay where it is. And what leaves is only the parameters and the learnings. You see that not just in a hospital scenario, but you see that in finance. Credit card companies, for example, of course, wouldn’t want to share their customer data with a competitor credit card company. But they know that their locally trained machine learning models are not as sensitive to fraud, because they are not seeing all the different kinds of fraud. Perhaps they’re seeing one kind of fraud, but a different credit card company might be seeing another kind of fraud.

Swarm learning could be used here, where each credit card company keeps their customer data private, no sharing of that. But a blockchain comes in and shares the learnings, the fraud data learnings: it collects all those learnings, averages them, and gives them back out to all the participating credit card companies. So this is one example. Banks could do the same. Industrial robots could do the same too.

We have an automotive customer that has tens of thousands of industrial robots, but in different countries. Industrial robots today follow instructions. But the next generation of robots, with AI, will also learn locally, say for example, to avoid certain mistakes and not repeat them. If these robots are in different countries where you cannot share data, sensor data from the local environment, across country borders, but you are allowed to share the learnings of avoiding these mistakes, swarm learning can therefore be applied. So you can now imagine a swarm of industrial robots, across different countries, sharing learnings so that they don’t repeat the same mistakes.

So yes. In enterprise, you can see different applications of swarm learning. Finance, engineering, and of course, in healthcare, as we’ve discussed.

Laurel: How do you think companies need to start thinking differently about their actual data architecture to encourage the ability to share these insights, but not actually share the data?

Eng Lim: First and foremost, we need to be comfortable with the fact that devices that are collecting data will proliferate. And they will be at the edge where the data first lands. What’s the edge? The edge is where you have a device, and where the data first lands electronically. And if you imagine 50 billion of them next year, for example, and growing, in one estimate, we need to be comfortable with the fact that data will be everywhere. And to design your organization, design the way you use data, design the way you access data with that concept in mind, i.e., moving from one which we are used to, that is data being centralized most of the time, to one where data is everywhere. So the way you access data needs to be different now. You cannot now think of first aggregating all the data, pulling all the data, backhauling all the data from the edge to a centralized location, then work with it. We may need to switch to a scenario where we are operating on the data, learning from the data while the data are still out there.

Laurel: So, we talked a bit about healthcare and manufacturing. How do you also envision the big ideas of smart cities and autonomous vehicles fitting in with the ideas of swarm intelligence?

Eng Lim: Yes, yes, yes. These are two big, big items. And very similar also, you think of a smart city, it is full of sensors, full of connected devices. You think of autonomous cars, one estimate puts it at something like 300 sensing devices in a car, all collecting data. A similar way of thinking of it, data is going to be everywhere, and collected in real time at these edge devices. For smart cities, it could be street lights. We work with one city with 200,000 street lights. And they want to make every one of these street lights smart. By smart, I mean ability to recommend decisions or even make decisions. You get to a point where, as I’ve said before, you cannot backhaul all the data all the time to the data center and make decisions after you’ve done the aggregation. A lot of times you have to make decisions where the data is collected. And therefore, things have to be smart at the edge, number one.

And if we take that step further beyond acting on instructions or acting on neural network models that have been pre-trained and then sent to the edge, you take one step beyond that, and that is, you want the edge devices to also learn on their own from the data they have collected. However, knowing that the data collected is biased to what they are only seeing, swarm learning will be needed in a peer-to-peer way for these devices to learn from each other.

So, this interconnectedness, the peer-to-peer interconnectedness of these edge devices, requires us to rethink or change the way we think about computing. Just take for example two autonomous cars. We call them connected cars to start with. Two connected cars, one in front of the other by 300 yards or 300 meters. The one in front, with lots of sensors in it, say for example in the shock absorbers, senses a pothole. And it actually can offer that sensed data that there is a pothole coming up to the cars behind. And if the cars behind switch on to automatically accept these, that pothole shows up on the car behind’s dashboard. And the car behind just pays maybe 0.10 cent for that information to the car in front.

So, you get a situation where you get this peer-to-peer sharing, in real time, without needing to send all that data first back to some central location and then send the new information back down to the car behind. So, you want it to be peer-to-peer. So more and more, I’m not saying this is implemented yet, but this gives you an idea of how thinking can change going forward. A lot more peer-to-peer sharing, and a lot more peer-to-peer learning.

Laurel: When you think about how long we’ve worked in the technology industry to think that peer-to-peer as a phrase has come back around, where it used to mean people or even computers sharing various bits of information over the internet. Now it is devices and sensors sharing bits of information with each other. Sort of a different definition of peer-to-peer.

Eng Lim: Yeah. Thinking is changing. And peer, the word peer, peer-to-peer, meaning it has the connotation of a more equitable sharing in there. That’s the reason why a blockchain is needed in some of these cases so that there is no central custodian to average the learnings, to combine the learnings. So you want a true peer-to-peer environment. And that’s what swarm learning is built for. And now the reason for that, it’s not because we feel peer-to-peer is the next big thing and therefore we should do it. It is because of data and the proliferation of these devices that are collecting data.

Imagine tens of billions of these out there, and every one of these devices getting to be smarter and consuming less energy to be that smart, moving from one where they follow instructions or infer from the pre-trained neural network model given to them, to one where they can even advance towards learning on their own. But there are so many of these devices out there that each of them is only seeing a small portion of the data. Small is still big if you combine all of them, 50 billion of them. But each of them is only seeing a small portion of data. And therefore, if they just learn in isolation, they’ll be highly biased towards what they’re seeing. As such, there must be some way where they can share their learnings without having to share their private data. And therefore, swarm learning, as opposed to backhauling all that data from the 50 billion edge devices back to these cloud locations, the data center locations, so they can do the combined learning.

Laurel: Which would cost certainly more than a fraction of a cent.

Eng Lim: Oh yeah. There is a saying, bandwidth, you pay for. Latency, you sweat for. So it’s cost. Bandwidth is cost.

Laurel: So as an expert in artificial intelligence, while we have you here, what are you most excited about in the coming years? What are you seeing that you’re thinking, that is going to be something big in the next five, 10 years?

Eng Lim: Thank you, Laurel. I don’t see myself as an expert in AI, but as a person who is tasked with, and excited about, working with customers on AI use cases and learning from them, the diversity of these different AI use cases, sometimes leading teams directly working on the projects and overseeing some of the projects. But in terms of the excitement, it may actually seem mundane. And that is, the exciting part is that I see AI, the ability for smart systems to learn and adapt and, in many cases, provide decision support to humans, and in other more limited cases, make decisions in support of humans, proliferating into everything we do, many things we do—certain things maybe we should limit—but in many things we do.

I mean, let’s just use the most basic of examples of how this progression could be. Let’s take a light switch. In the early days, even until today, the most basic light switch is a manual one. A human goes ahead, throws the switch on, and the light comes on. And throws the switch off, and the light goes off. Then we move on to the next level, if you want an analogy, where we automate that switch. We put a set of instructions on that switch with a light meter, and set the instructions to say, if the lighting in this room drops to 25% of its peak, switch on. So basically, we gave an instruction, with a sensor to go with it, to the switch. And then the switch is automatic. When the lighting in the room drops to 25% of its peak illumination, it switches on the lights. So now the switch is automated.

Now we can take that automation even a step further by making the switch smart, in that it can have more sensors. And then, through the combination of those sensors, it makes decisions as to whether to switch the light on. And to control all these sensors, we build a neural network model that has been pre-trained separately and then downloaded onto the switch. This is where we are at today. The switch is now smart. Smart city, smart street lights, autonomous cars, and so on.

Now, is there another level beyond that? There is. And that is when the switch not only follows instructions or uses a trained neural network model that combines all the different sensor data to decide when to switch the light on in a more precise way. It advances further, to one where it learns. That’s the key word. It learns from mistakes. What would be the example? The example would be, based on the neural network model it has, that was pre-trained previously and downloaded onto the switch with all the settings, it turns the light on. But the human comes in, and the human says, I don’t need the light on here this time around, and switches the light off. Then the switch realizes that it made a decision that the human didn’t like. So after a few of these, it starts to adapt itself, learn from these, adapt itself so that it switches the light on according to the changing human preferences. That’s the next step, where you want edge devices that are collecting data at the edge to learn from that data.
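The last step, a switch that adapts when a human overrides it, can be illustrated with a toy example. This is purely a sketch: the threshold values and the simple update rule are assumptions made for illustration, not anything described by HPE.

```python
# Toy illustration of the final step described above: a "smart switch" that
# starts from a fixed rule and nudges its own threshold whenever a human
# overrides its decision. All values and the update rule are made up.

class AdaptiveLightSwitch:
    def __init__(self, threshold=0.25, learning_rate=0.05):
        self.threshold = threshold          # switch on below 25% of peak light
        self.learning_rate = learning_rate

    def decide(self, light_level):
        """Rule-based decision: on if the room is darker than the threshold."""
        return light_level < self.threshold

    def learn_from_override(self, light_level, human_wanted_on):
        """Nudge the threshold toward the human's preference at this light level."""
        decided_on = self.decide(light_level)
        if decided_on and not human_wanted_on:
            # We switched on but the human turned it off: be less eager.
            self.threshold -= self.learning_rate * (self.threshold - light_level)
        elif not decided_on and human_wanted_on:
            # We stayed off but the human turned it on: be more eager.
            self.threshold += self.learning_rate * (light_level - self.threshold)

switch = AdaptiveLightSwitch()
for light, wanted_on in [(0.20, False), (0.22, False), (0.30, True)]:
    switch.learn_from_override(light, wanted_on)
print(f"adapted threshold: {switch.threshold:.3f}")
```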

Then of course, if you take that even further, all the switches in this office or in a residential unit, learn from each other. That will be swarm learning. So if you then extend the switch to toasters, to fridges, to cars, to industrial robots and so on, you will see that doing this, we will clearly reduce energy consumption, reduce waste, and improve productivity. But the key must be, for human good.

Laurel: And what a wonderful way to end our conversation. Thank you so much for joining us on the Business Lab.

Eng Lim: Thank you Laurel. Much appreciated.

Laurel: That was Dr. Eng Lim Goh, senior vice president and CTO of artificial intelligence at Hewlett Packard Enterprise, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That’s it for this episode of Business Lab, I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. The show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.

This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not produced by MIT Technology Review’s editorial staff.


Companies hoping to grow carbon-sucking kelp may be rushing ahead of the science



In late January, Elon Musk tweeted that he planned to give $100 million to promising carbon removal technologies, stirring the hopes of researchers and entrepreneurs.

A few weeks later, Arin Crumley, a filmmaker who went on to develop electric skateboards, announced that a team was forming on Clubhouse, the audio app popular in Silicon Valley, to compete for a share of the Musk-funded XPrize.

A group of artists, designers, and engineers assembled there and discussed a variety of possible natural and technical means of sucking carbon dioxide out of the atmosphere. As the conversations continued and a core team coalesced, they formed a company, Pull To Refresh, and eventually settled on growing giant bladder kelp in the ocean.

So far, the venture’s main efforts include growing the seaweed in a tank and testing their control systems on a small fishing boat on a Northern California lake. But it’s already encouraging companies to “get in touch” if they’re interested in purchasing tons of sequestered CO2, as a way to balance out their greenhouse-gas emissions.

Crumley says that huge fleets of semi-autonomous vessels growing kelp could suck up around a trillion tons of carbon dioxide and store it away in the depths of the sea, effectively reversing climate change. “With a small amount of open ocean,” he says, “we can get back to preindustrial levels” of atmospheric carbon dioxide.

‘No one knows’

Numerous studies show the world may need to remove billions of tons of carbon dioxide a year from the atmosphere by midcentury to prevent dangerous levels of warming or bring the planet back from them. In addition, more and more corporations are scouring the market for carbon credits that allow them to offset their emissions and claim progress toward the goal of carbon neutrality.

All of that has spurred a growing number of companies, investors, and research groups to explore carbon removal approaches that range from planting trees to grinding up minerals to building giant CO2-sucking factories.

Kelp has become an especially active area of inquiry and investment because there’s already an industry that cultivates it on a large scale—and the theoretical carbon removal potential is significant. An expert panel assembled by the Energy Futures Initiative estimated that kelp has the capacity to pull down about 1 billion to 10 billion tons of carbon dioxide per year.

But scientists are still grappling with fundamental questions about this approach. How much kelp can we grow? What will it take to ensure that most of the seaweed sinks to the bottom of the ocean? And how much of the carbon will stay there long enough to really help the climate?

In addition, no one knows what the ecological impact of depositing billions of tons of dead biomass on the sea floor would be.

“We just have zero experience with perturbing the bottom of the ocean with that amount of carbon,” says Steven Davis, an associate professor at the University of California, Irvine, who is analyzing the economics of various uses of kelp. “I don’t think anybody has a great idea what it will mean to actively intervene in the system at that scale.”

The scientific unknowns, however, haven’t prevented some ventures from rushing ahead, making bold promises and aiming to sell carbon credits. If the practice doesn’t sequester as much carbon as claimed, it could slow or overstate progress on climate change, as the companies buying those credits carry on emitting on the false promise that the oceans are balancing out that pollution, ton for ton.

“For the field as a whole, I think, having this research done by universities in partnership with government scientists and national labs would go a long way toward establishing a basic level of trust before we’re commercializing some of this stuff,” says Holly Buck, an assistant professor at the University at Buffalo, who is studying the social implications of ocean-based carbon removal.

The lure of the ocean

Swaying columns of giant kelp line the rocky shores of California’s Monterey Bay, providing habitat and hunting grounds for rockfish, sea otters, and urchins. The brown macroalgae draws on sunlight, carbon dioxide, and nutrients in the cool coastal waters to grow up to two feet a day. The forests continually shed their blades and fronds, and the seaweed can be knocked loose entirely by waves and storms.

In the late 1980s, researchers at the Monterey Bay Aquarium began a series of experiments to determine where all that seaweed ends up. They attached radio transmitters to large floating rafts of kelp and scanned the ocean depths with remote-operated submarines.

An underwater kelp forest off the coast of California. (Getty)

The scientists estimated that the forests released more than 130,000 tons of kelp each year. Most of the rafts of kelp washed up on shore within the bay in a matter of days. But in the underwater observations, they found bundles of seaweed lining the walls and floor of an adjacent underwater gully known as the Carmel Submarine Canyon, hundreds of meters below the surface.

Scientists have spotted similar remnants of kelp on the deep ocean floors in coastal pockets throughout the world. And it’s clear that some of that carbon in the biomass stays down for millennia, because kelp is a known source of oil deposits.

A 2016 paper published in Nature Geoscience estimated that seaweed may naturally sequester nearly 175 million tons of carbon around the world each year as it sinks into the deep sea or drifts into submarine canyons.

That is well below the levels of carbon dioxide that the world will likely need to remove annually by midcentury—let alone the amounts envisioned by Crumley and his team. Which is why Pull To Refresh and other companies are exploring ways to radically scale up the growth of kelp, on offshore vessels or elsewhere.

Reaching the deep seas

But how much of the carbon will remain trapped below the surface and for how long?

Certain species of seaweed, like giant bladder kelp, have tiny gas bladders on their blades, enabling the macroalgae to collect more of the sunlight necessary to drive photosynthesis. The bladders can also keep the remnants or rafts afloat for days or longer depending on the species, helping currents carry dislodged kelp to distant shores.

When the carbon in kelp decomposes on land, or turns into dissolved inorganic carbon dioxide in shallow seawater, it can return to the atmosphere, says David Koweek, science director at Ocean Visions, a research organization that partners with institutions like MIT, Stanford, and the Monterey Bay Aquarium Research Institute. The carbon may also be released if marine creatures digest the kelp in the upper oceans.

But some kelp sinks into the deep ocean as well. Bladders degrade. Storms push the seaweed down so deep that they deflate. Certain species are naturally nonbuoyant. And some amount that breaks free below the surface stays there and may drift down into deeper waters through underwater canyons, like the one off the coast of Monterey.

Ocean circulation models suggest much of the carbon in biomass that reaches great depths of the oceans could remain there for very long times, because the overturning patterns that bring deep waters toward the surface operate so slowly. Below 2,100 meters, for instance, the median sequestration time would exceed 750 years across major parts of the North Pacific, according to a recent paper in Environmental Research Letters.

All of which suggests that deliberately sinking seaweed could store away carbon long enough to ease some of the pressures of climate change. But it will matter a lot where it’s done, and what efforts are taken to ensure that most of the biomatter reaches the deep ocean.

For-profit plans

Pull To Refresh’s plan is to develop semi-autonomous vessels equipped with floats, solar panels, cameras, and satellite antennas, enabling the crafts to adjust their steering and speed to arrive at designated points in the open ocean.

Each of these so-called Canaries will also tow a sort of underwater trellis made of steel wire, known as the Tadpole, tethering together vases in which giant bladder kelp can grow. The vessel will feed the seaweed through tubes from an onboard tank of micronutrients.

Pull To Refresh has tested its control systems on a fishing boat on a lake in Northern California. (Courtesy: Pull To Refresh)

Eventually, Crumley says, the kelp will die, fall off, and naturally make its way down to the bottom of the ocean. By putting the vessels far from the coast, the company believes, it can address the risk that the dead seaweed will wash up on shore.

Pull To Refresh has already begun discussions with companies about purchasing “kelp tonnes” from the seaweed it’ll eventually grow.

“We need a business model that works now-ish or as soon as possible,” Crumley says. “The ones we’re talking to are forgiving; they understand that it’s in its infancy. So we will be up-front about anything we don’t know about. But we’ll keep deploying these Canaries until we’ve got enough tonnes to close out your order.”

Crumley said in an email that the company will have two years to get the carbon accounting for its process approved by a third-party accreditor, as part of any transition. He said the company is conducting internal environmental impact efforts, is talking to at least one carbon removal registry, and hopes to receive input from outside researchers working on these issues.

“We are never going to sell a tonne that isn’t third-party verified simply because we don’t want to be a part of anything that could even just sound shady,” he wrote.

‘Scale beyond any other’

Other ventures are taking added steps to ensure that the kelp sinks, and to coordinate with scientific experts in the field.

Running Tide, an aquaculture company based in Portland, Maine, is carrying out field tests in the North Atlantic to determine where and how various types of kelp grow best under a variety of conditions. The company is primarily focused on nonbuoyant species of macroalgae and has also been developing biodegradable floats.

The company isn’t testing sinking yet, but the basic concept is that the floats will break down as the seaweed grows in the ocean. After about six to nine months, the whole thing should readily sink to the bottom of the ocean and stay there.

Marty Odlin, chief executive of Running Tide, stresses that the company is working with scientists to ensure they’re evaluating the carbon removal potential of kelp in rigorous and appropriate ways.

Ocean Visions helped establish a scientific advisory team to guide the company’s field trials, made up of researchers from the Monterey Bay Aquarium Research Institute, UC Santa Barbara, and other institutions. The company is also coordinating with the Centre for Climate Repair at Cambridge on efforts to more precisely determine how much carbon the oceans can take up through these sorts of approaches.

Running Tide plans to carry out tests for at least two and a half years to develop a “robust data set” on the effects of these practices.

“At that point, the conclusion might be we need more data or this doesn’t work or it’s ready to go,” Odlin says.

The company has high hopes for what it might achieve, stating on its website: “Growing kelp and sinking it in the deep ocean is a carbon sequestration solution that can scale beyond any other.”

Running Tide has raised millions of dollars from Venrock, Lowercarbon Capital, and other investors. The tech companies Shopify and Stripe have both provided funds as well, purchasing future carbon dioxide removal at high prices ($250 a ton in Stripe’s case) to help fund research and development efforts.

Several other companies and nonprofits are also exploring ways to sequester carbon dioxide from seaweed. That includes the Climate Foundation, which is selling a $125, blockchain-secured “kelp coin” to support its broader research efforts to increase kelp production for food and other purposes.

The risks

Some carbon removal experts fear that market forces could propel kelp-sinking efforts forward, whatever the research finds about its effectiveness or risks. The companies or nonprofits doing it will have financial incentives to sell credits. Investors will want to earn their money back. Corporate demand for sources of carbon credits is skyrocketing. And offset registries, which earn money by providing a stamp of approval for carbon credit programs, have a clear stake in adding a new category to the carbon marketplace.

One voluntary offset registry, Verra, is already developing a protocol for carbon removal through seagrass cultivation and is “actively watching” the kelp space, according to Yale Environment 360.

We’ve already seen these pressures play out with other approaches to offset credits, says Danny Cullenward, policy director at CarbonPlan, a nonprofit that assesses the scientific integrity of carbon removal efforts.

CarbonPlan and other research groups have highlighted excessive crediting and other problems with programs designed to incentivize, measure, and verify emissions avoided or carbon removal achieved through forest and soil management practices. Yet the carbon credit markets continue to grow as nations and corporations look for ways to offset their ongoing emissions, on paper if not in the atmosphere.

Sinking seaweed to the bottom of the ocean creates especially tricky challenges in verifying that the carbon removal is really happening. After all, it’s far easier to measure trees than it will be to track the flow of carbon dissolved in the deep ocean. That means any carbon accounting system for kelp will rely heavily on models that determine how much carbon should stay under the surface for how long in certain parts of the ocean, under certain circumstances. Getting the assumptions right will be critical to the integrity of any eventual offset program—and any corporate carbon math that relies on them.

Some researchers also worry about the ecological impact of seaweed sinking.

Wil Burns, a visiting professor focused on carbon removal at Northwestern University and a member of Running Tide’s advisory board, notes that growing enough kelp to achieve a billion tons of carbon removal could require millions of buoys in the oceans.

Those floating forests could block the migration paths of marine mammals. Creatures could also hitch aboard the buoys or the vessels delivering them, potentially introducing invasive species into different areas. And the kelp forests themselves could create “gigantic new sushi bars,” Burns says, perhaps tipping food chains in ways that are hard to predict.

The addition of that much biomatter and carbon into the deep ocean could alter the biochemistry of the waters, too, and that could have cascading effects on marine life.

“If you’re talking about an approach that could massively alter ocean ecosystems, do you want that in the hands of the private sector?” Burns says.

Running Tide’s Odlin stresses that he has no interest in working on carbon removal methods that don’t work or that harm the oceans. He says the reason he started looking into kelp sinking was that he witnessed firsthand how climate change was affecting marine ecosystems and fish populations.

“I’m trying to fix that problem,” he says. “If this activity doesn’t fix that problem, I’ll go work on something else that will.”

Scaling up

Scaling up kelp-based carbon removal from the hundreds of millions of tons estimated to occur naturally to the billions of tons needed will also face some obvious logistical challenges, says John Beardall, an emeritus professor at Monash University in Australia, who has studied the potential and challenges of seaweed cultivation.

For one, only certain parts of the world offer suitable habitat for most kelp. Seaweed largely grows in relatively shallow, cool, nutrient-rich waters along rocky coastlines.

Expanding kelp cultivation near shore will be constrained by existing uses like shipping, fishing, marine protected areas, and indigenous territories, Ocean Visions notes in a “state of technology” assessment. Moving it offshore, with rafts or buoys, will create engineering challenges and add costs.

Moreover, companies may have to overcome legal complications if their primary purpose will be sinking kelp on large, commercial scales. There are complex and evolving sets of rules under treaties like the London Convention and the London Protocol that prevent dumping in the open oceans and regulate “marine geoengineering activities” designed to counteract climate change. 

Commercial efforts to move ahead with sinking seaweed in certain areas could be subject to permitting requirements under a resolution of the London Convention, or run afoul of at least the spirit of the rule if they move ahead without environmental assessments, Burns says.

Climate change itself is already devastating kelp forests in certain parts of the world as well, Beardall noted in an email. Warming waters coupled with a population explosion of sea urchins that feed on seaweed have decimated the kelp forests along California’s coastline. The giant kelp forests along Tasmania have also shrunk by about 95% in recent years.

“This is not to say that we shouldn’t look to seaweed harvest and aquaculture as one approach to CO2 sequestration,” Beardall wrote. “But I simply want to make the point that it is not going to be a major route.”

Other, better uses

Another question is simply whether sinking seaweed is the best use of it.

It’s a critical food and income source for farmers across significant parts of Asia, and one that’s already under growing strains as climate change accelerates. It’s used in pharmaceuticals, food additives, and animal feed. And it could be employed in other applications that tie up the carbon, like bioplastics or biochar that enriches soils.

“Sustainably farmed seaweed is a valuable product with a very wide range of uses … and a low environmental footprint,” said Dorte Krause-Jensen, a professor at Aarhus University in Denmark who has studied kelp carbon sequestration, in an email. “In my opinion it would be a terrible waste to dump the biomass into the deep sea.”

UC Irvine’s Davis has been conducting a comparative economic analysis of various ways of putting kelp to use, including sinking it, converting it to potentially carbon-neutral biofuels, or using it as animal feed. The preliminary results show that even if every cost was at the lowest end of the ranges, seaweed sinking could run around $200 a ton, which is more than double the long-term, low-end cost estimates for carbon-sucking factories.

Davis says those costs would likely drive kelp cultivators toward uses with higher economic value. “I’m more and more convinced that the biggest climate benefits of farmed kelp won’t involve sinking it,” he says. 

‘Get it done’

Pull To Refresh’s Crumley says he and his team hope to begin testing a vessel in the ocean this year. If it works well, they plan to attach baby kelp to the Tadpole and “send it on its voyage,” he says.

He disputed the argument that companies should hold off on selling tons now on the promise of eventual carbon removal. He says that businesses need the resources to develop and scale up these technologies, and that government grants won’t get the field where it needs to be.

“We’ve just decided to get it done,” he says. “If, in the end, we’re wrong, we’ll take responsibility for any mistakes. But we think this is the right move.”

It’s not clear, however, how such a startup could take responsibility for mistakes if the activities harm marine ecosystems. And at least for now, there are no clear mechanisms that would hold companies accountable for overestimating carbon removal through kelp.




Activists are helping Texans get access to abortion pills online



The process only requires an internet connection: patients go online and answer some HIPAA-compliant questions about their pregnancy, such as when the first day of their last period was. If it’s a straightforward case, it’s approved by the doctor—there are seven American doctors covering 15 states—and the medication arrives in a few days. In places like Texas, where Aid Access doesn’t have doctors in state, Aid Access founder Rebecca Gomperts prescribes the medication from Europe, where she is based. That can take around three weeks, Pitney says. 

The ability to get a safe, discreet abortion at home with just an internet connection could be life-changing for Texans and others in need. “It’s really changed the face of abortion access,” says Elisa Wells, the cofounder of Plan C, which provides information and education about how to access the pills.

In Texas, the need is especially acute because cultural stigma and an existing history of restrictive laws mean there are very few in-person clinics available. Before the recent law change, Texans were three times more likely than the national average to use abortion pills, because abortion clinics were so far away.

“In a situation like Texas, where mainstream avenues of access have been almost entirely cut off, it is a solution,” says Wells, who describes much of Texas as an “abortion desert.” Black and Hispanic people often have less access to medical care, and so the ability to access abortion pills online is vital for these communities.

They’re also much cheaper than in-clinic abortions, with most pills costing $105 to $150 plus a required online consultation, depending on which state you live in. (Aid Access forgives some or all of the payment if necessary.)

But while they’re commonly prescribed in other countries (they’re used in around 90% of abortions in France and Scotland, for example), only 40% of American abortions use pills. In fact, using the pills in the US to “self-manage an abortion” can lead to charges in at least 20 states, including Texas, and has been the basis for the arrest of 21 people since 2000. Aid Access’s use of Gomperts to write prescriptions as a foreign doctor has come under federal investigation by the FDA, which the group challenged. The situation remains unresolved. 


Troll farms reached 140 million Americans a month on Facebook before 2020 election, internal report shows



Joe Osborne, a Facebook spokesperson, said in a statement that the company “had already been investigating these topics” at the time of Allen’s report, adding: “Since that time, we have stood up teams, developed new policies, and collaborated with industry peers to address these networks. We’ve taken aggressive enforcement actions against these kinds of foreign and domestic inauthentic groups and have shared the results publicly on a quarterly basis.”

In the process of fact-checking this story shortly before publication, MIT Technology Review found that five of the troll-farm pages mentioned in the report remained active.

This was the largest troll-farm page targeting African-Americans in October 2019. It still remains active on Facebook.

The report found that troll farms were reaching the same demographic groups singled out by the Kremlin-backed Internet Research Agency (IRA) during the 2016 election, which had targeted Christians, Black Americans, and Native Americans. A 2018 BuzzFeed News investigation found that at least one member of the Russian IRA, indicted for alleged interference in the 2016 US election, had also visited Macedonia around the emergence of its first troll farms, though it didn’t find concrete evidence of a connection. (Facebook said its investigations hadn’t turned up a connection between the IRA and Macedonian troll farms either.)

“This is not normal. This is not healthy,” Allen wrote. “We have empowered inauthentic actors to accumulate huge followings for largely unknown purposes … The fact that actors with possible ties to the IRA have access to huge audience numbers in the same demographic groups targeted by the IRA poses an enormous risk to the US 2020 election.”

As long as troll farms found success in using these tactics, any other bad actor could too, he continued: “If the Troll Farms are reaching 30M US users with content targeted to African Americans, we should not at all be surprised if we discover the IRA also currently has large audiences there.”

Allen wrote the report as the fourth and final installment of a year-and-a-half-long effort to understand troll farms. He left the company that same month, in part because of frustration that leadership had “effectively ignored” his research, according to the former Facebook employee who supplied the report. Allen declined to comment.

The report reveals the alarming state of affairs in which Facebook leadership left the platform for years, despite repeated public promises to aggressively tackle foreign-based election interference. MIT Technology Review is making the full report available, with employee names redacted, because it is in the public interest.

Its revelations include:

  • As of October 2019, around 15,000 Facebook pages with a majority US audience were being run out of Kosovo and Macedonia, known bad actors during the 2016 election.
  • Collectively, those troll-farm pages—which the report treats as a single page for comparison purposes—reached 140 million US users monthly and 360 million global users weekly. Walmart’s page reached the second-largest US audience at 100 million.
  • The troll farm pages also combined to form:
    • the largest Christian American page on Facebook, 20 times larger than the next largest—reaching 75 million US users monthly, 95% of whom had never followed any of the pages.
    • the largest African-American page on Facebook, three times larger than the next largest—reaching 30 million US users monthly, 85% of whom had never followed any of the pages.
    • the second-largest Native American page on Facebook, reaching 400,000 users monthly, 90% of whom had never followed any of the pages.
    • the fifth-largest women’s page on Facebook, reaching 60 million US users monthly, 90% of whom had never followed any of the pages.
  • Troll farms primarily affect the US but also target the UK, Australia, India, and Central and South American countries.
  • Facebook has conducted multiple studies confirming that content that is more likely to receive user engagement (likes, comments, and shares) is more likely to be of a type known to be bad. Still, the company has continued to rank content in users’ newsfeeds according to what will receive the highest engagement.
  • Facebook forbids pages from posting content merely copied and pasted from other parts of the platform but does not enforce the policy against known bad actors. This makes it easy for foreign actors who do not speak the local language to post entirely copied content and still reach a massive audience. At one point, as many as 40% of page views on US pages went to those featuring primarily unoriginal content or material of limited originality.
  • Troll farms previously made their way into Facebook’s Instant Articles and Ad Breaks partnership programs, which are designed to help news organizations and other publishers monetize their articles and videos. At one point, thanks to a lack of basic quality checks, as many as 60% of Instant Article reads were going to content that had been plagiarized from elsewhere. This made it easy for troll farms to mix in unnoticed, and even receive payments from Facebook.

How Facebook enables troll farms and grows their audiences

The report looks specifically at troll farms based in Kosovo and Macedonia, which are run by people who don’t necessarily understand American politics. Yet because of the way Facebook’s newsfeed reward systems are designed, they can still have a significant impact on political discourse.
