
Stop talking about AI ethics. It’s time to talk about power.

In doing that, I wanted to really open up this understanding of AI as neither artificial nor intelligent. It’s the opposite of artificial. It comes from the most material parts of the Earth’s crust and from human bodies laboring, and from all of the artifacts that we produce and say and photograph every day. Neither is it intelligent. I think there’s this great original sin in the field, where people assumed that computers are somehow like human brains and that if we just train them like children, they will slowly grow into these supernatural beings.

That’s something that I think is really problematic—that we’ve bought this idea of intelligence when in actual fact, we’re just looking at forms of statistical analysis at scale that have as many problems as the data they’re given.

Was it immediately obvious to you that this is how people should be thinking about AI? Or was it a journey?

It’s absolutely been a journey. I’d say one of the turning points for me was back in 2016, when I started a project called “Anatomy of an AI system” with Vladan Joler. We met at a conference specifically about voice-enabled AI, and we were trying to effectively draw what it takes to make an Amazon Echo work. What are the components? How does it extract data? What are the layers in the data pipeline?

We realized, well—actually, to understand that, you have to understand where the components come from. Where did the chips get produced? Where are the mines? Where does it get smelted? Where are the logistical and supply chain paths?

Finally, how do we trace the end of life of these devices? How do we look at where the e-waste tips are located in places like Malaysia and Ghana and Pakistan? What we ended up with was this very time-consuming two-year research project to really trace those material supply chains from cradle to grave.

When you start looking at AI systems on that bigger scale, and on that longer time horizon, you shift away from these very narrow accounts of “AI fairness” and “ethics” to saying: these are systems that produce profound and lasting geomorphic changes to our planet, as well as increase the forms of labor inequality that we already have in the world.

So that made me realize that I had to shift from an analysis of just one device, the Amazon Echo, to applying this sort of analytic to the entire industry. That to me was the big task, and that’s why Atlas of AI took five years to write. There’s such a need to actually see what these systems really cost us, because we so rarely do the work of actually understanding their true planetary implications.

The other thing I would say that’s been a real inspiration is the growing field of scholars who are asking these bigger questions around labor, data, and inequality. Here I’m thinking of Ruha Benjamin, Safiya Noble, Mar Hicks, Julie Cohen, Meredith Broussard, Simone Browne—the list goes on. I see this book as a contribution to that body of knowledge by bringing in perspectives that connect the environment, labor rights, and data protection.

You travel a lot throughout the book. Almost every chapter starts with you actually looking around at your surroundings. Why was this important to you?

It was a very conscious choice to ground an analysis of AI in specific places, to move away from these abstract “nowheres” of algorithmic space, where so many of the debates around machine learning happen. And hopefully it highlights the fact that when we don’t do that, when we just talk about these “nowhere spaces” of algorithmic objectivity, that is also a political choice, and it has ramifications.

In terms of threading the locations together, this is really why I started thinking about this metaphor of an atlas, because atlases are unusual books. They’re books that you can open up and look at the scale of an entire continent, or you can zoom in and look at a mountain range or a city. They give you these shifts in perspective and shifts in scale.

There’s this lovely line that I use in the book from the physicist Ursula Franklin. She writes about how maps join together the known and the unknown in these methods of collective insight. So for me, it was really drawing on the knowledge that I had, but also thinking about the actual locations where AI is being constructed very literally from rocks and sand and oil.

What kind of feedback has the book received?

One of the things that I’ve been surprised by in the early responses is that people really feel like this kind of perspective was overdue. There’s a moment of recognition that we need to have a different sort of conversation than the ones that we’ve been having over the last few years.

We’ve spent far too much time focusing on narrow tech fixes for AI systems and always centering technical responses and technical answers. Now we have to contend with the environmental footprint of the systems. We have to contend with the very real forms of labor exploitation that have been happening in the construction of these systems.

And we’re also now starting to see the toxic legacy of what happens when you just rip as much data off the internet as you can and call it ground truth. That kind of problematic framing of the world has produced so many harms, and as always, those harms have been felt most of all by communities who were already marginalized and not experiencing the benefits of those systems.

What do you hope people will start to do differently?

I hope it’s going to be a lot harder to have these cul-de-sac conversations where terms like “ethics” and “AI for good” have been so completely denatured of any actual meaning. I hope it pulls aside the curtain and says, let’s actually look at who’s running the levers of these systems. That means shifting away from just focusing on things like ethical principles to talking about power.

How do we move away from this ethics framing?

Decarbonizing industries with connectivity and 5G

The United Nations Intergovernmental Panel on Climate Change’s sixth assessment report—an aggregated assessment of scientific research prepared by some 300 scientists across 66 countries—has served as the loudest and clearest wake-up call to date on the global warming crisis. The panel unequivocally attributes the increase in the earth’s temperature—it has risen by 1.1 °C since the Industrial Revolution—to human activity. Without substantial and immediate reductions in carbon dioxide and other greenhouse gas emissions, temperatures will rise between 1.5 °C and 2 °C before the end of the century. That, the panel posits, will lead all of humanity to a “greater risk of passing through ‘tipping points,’ thresholds beyond which certain impacts can no longer be avoided even if temperatures are brought back down later on.”

Corporations and industries must therefore redouble their greenhouse gas emissions reduction and removal efforts with speed and precision—but to do this, they must also commit to deep operational and organizational transformation. Cellular infrastructure, particularly 5G, is one of the many digital tools and technology-enabled processes organizations have at their disposal to accelerate decarbonization efforts.  

5G and other cellular technology can enable increasingly interconnected supply chains and networks, improve data sharing, optimize systems, and increase operational efficiency. These capabilities could soon contribute to an exponential acceleration of global efforts to reduce carbon emissions.

Industries such as energy, manufacturing, and transportation could have the biggest impact on decarbonization efforts through the use of 5G, as they are some of the biggest greenhouse-gas-emitting industries, and all rely on connectivity to link to one another through communications network infrastructure.

The higher performance and improved efficiency of 5G—which delivers multi-gigabit peak data speeds, ultra-low latency, increased reliability, and increased network capacity—could help businesses and public infrastructure providers focus on business transformation and reduction of harmful emissions. Doing so requires effective digital management and monitoring of distributed operations, with resilience and analytic insight. 5G will help factories, logistics networks, power companies, and others operate more efficiently, more consciously, and more purposefully in line with their explicit sustainability objectives, through better insight and more powerful network configurations.

This report, “Decarbonizing industries with connectivity & 5G,” argues that the capabilities enabled by broadband cellular connectivity, primarily though not exclusively through 5G network infrastructure, constitute a unique, powerful, and immediate enabler of carbon reduction efforts. They have the potential to create a transformational acceleration of decarbonization efforts, as increasingly interconnected supply chains, transportation, and energy networks share data to increase efficiency and productivity, thereby optimizing systems for lower carbon emissions.

Surgeons have successfully tested a pig’s kidney in a human patient

The reception: The research was conducted last month and is yet to be peer reviewed or published in a journal, but external experts say it represents a major advance. “There is no doubt that this is a highly significant breakthrough,” says Darren K. Griffin, a professor of genetics at the University of Kent, UK. “The research team were cautious, using a patient who had suffered brain death, attaching the kidney to the outside of the body, and closely monitoring for only a limited amount of time. There is thus a long way to go and much to discover,” he added. 

“This is a huge breakthrough. It’s a big, big deal,” Dorry Segev, a professor of transplant surgery at Johns Hopkins School of Medicine who was not involved in the research, told the New York Times. However, he added, “we need to know more about the longevity of the organ.”

The background: In recent years, research has increasingly zeroed in on pigs as the most promising avenue for addressing the shortage of organs for transplant, but the approach has faced a number of obstacles, most prominently the fact that a sugar in pig cells triggers an aggressive rejection response in humans.

The researchers got around this by genetically altering the donor pig to knock out the gene encoding the sugar molecule that causes the rejection response. The pig was genetically engineered by Revivicor, one of several biotech companies working to develop pig organs to transplant into humans. 

The big prize: There is a dire need for more kidneys. More than 100,000 people in the US are currently waiting for a kidney transplant, and 13 of them die every day, according to the National Kidney Foundation. Genetically engineered pigs could offer a crucial lifeline for these people, if the approach tested at NYU Langone can work for much longer periods.

Getting value from your data shouldn’t be this hard

The potential impact of the ongoing worldwide data explosion continues to excite the imagination. A 2018 report estimated that every second of every day, every person produces 1.7 MB of data on average—and annual data creation has more than doubled since then and is projected to more than double again by 2025. A report from McKinsey Global Institute estimates that skillful uses of big data could generate an additional $3 trillion in economic activity, enabling applications as diverse as self-driving cars, personalized health care, and traceable food supply chains.

But adding all this data to the system is also creating confusion about how to find it, use it, manage it, and legally, securely, and efficiently share it. Where did a certain dataset come from? Who owns what? Who’s allowed to see certain things? Where does it reside? Can it be shared? Can it be sold? Can people see how it was used?

As data’s applications grow and become more ubiquitous, producers, consumers, owners, and stewards of data are finding that they don’t have a playbook to follow. Consumers want to connect to data they trust so they can make the best possible decisions. Producers need tools to share their data safely with those who need it. But technology platforms fall short, and there are no real common sources of truth to connect both sides.

How do we find data? When should we move it?

In a perfect world, data would flow freely like a utility accessible to all. It could be packaged up and sold like raw materials. It could be viewed easily, without complications, by anyone authorized to see it. Its origins and movements could be tracked, removing any concerns about nefarious uses somewhere along the line.

Today’s world, of course, does not operate this way. The massive data explosion has created a long list of issues and opportunities that make it tricky to share chunks of information.

With data being created nearly everywhere within and outside of an organization, the first challenge is identifying what is being gathered and how to organize it so it can be found.

A lack of transparency and sovereignty over stored and processed data and infrastructure opens up trust issues. Today, moving data to centralized locations from multiple technology stacks is expensive and inefficient. The absence of open metadata standards and widely accessible application programming interfaces can make it hard to access and consume data. The presence of sector-specific data ontologies can make it hard for people outside the sector to benefit from new sources of data. Multiple stakeholders and difficulty accessing existing data services can make it hard to share without a governance model.
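
To make the metadata gap concrete, here is one rough sketch of the kind of catalog entry a shared dataset might need. The DatasetRecord class, its field names, and the example values are hypothetical illustrations, not part of Gaia-X or any existing open metadata standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetRecord:
    """Hypothetical catalog entry describing a dataset an organization might share."""
    dataset_id: str    # stable identifier consumers can look up
    owner: str         # accountable organization or business unit
    origin: str        # system or region where the data was produced
    location: str      # where the data currently resides (URI, data center)
    schema_uri: str    # pointer to the field-level schema or ontology
    license: str       # terms under which the data may be shared or sold
    allowed_consumers: List[str] = field(default_factory=list)
    lineage: List[str] = field(default_factory=list)  # upstream dataset_ids it was derived from

# Illustrative entry for a made-up factory sensor dataset.
sensor_data = DatasetRecord(
    dataset_id="plant-7/vibration/2021-10",
    owner="manufacturing-ops",
    origin="edge-gateway-eu-west",
    location="s3://example-bucket/plant-7/vibration/2021-10/",
    schema_uri="https://example.org/schemas/vibration-v2.json",
    license="internal-share-only",
    allowed_consumers=["predictive-maintenance-team"],
    lineage=["plant-7/vibration/raw-stream"],
)
```

Even a minimal record like this one would answer most of the questions raised earlier: where a dataset came from, who owns it, where it resides, and who is allowed to see it.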

Europe is taking the lead

Despite the issues, data-sharing projects are being undertaken on a grand scale. One that’s backed by the European Union and a nonprofit group is creating an interoperable data exchange called Gaia-X, where businesses can share data under the protection of strict European data privacy laws. The exchange is envisioned as a vessel to share data across industries and a repository for information about data services around artificial intelligence (AI), analytics, and the internet of things.

Hewlett Packard Enterprise recently announced a solution framework to support companies, service providers, and public organizations’ participation in Gaia-X. The dataspaces platform, which is currently in development and based on open standards and cloud-native architecture, democratizes access to data, data analytics, and AI by making them more accessible to domain experts and common users. It provides a place where experts in domain areas can more easily identify trustworthy datasets and securely perform analytics on operational data—without always requiring the costly movement of data to centralized locations.

By using this framework to integrate complex data sources across IT landscapes, enterprises will be able to provide data transparency at scale, so everyone—whether a data scientist or not—knows what data they have, how to access it, and how to use it in real time.

Data-sharing initiatives are also at the top of enterprises’ agendas. One important priority for enterprises is the vetting of data that’s being used to train internal AI and machine learning models. AI and machine learning are already being used widely in enterprises and industry to drive ongoing improvements in everything from product development to recruiting to manufacturing. And we’re just getting started. IDC projects the global AI market will grow from $328 billion in 2021 to $554 billion in 2025.

To unlock AI’s true potential, governments and enterprises need to better understand the collective legacy of all the data that is driving these models. How do AI models make their decisions? Do they have bias? Are they trustworthy? Have untrustworthy individuals been able to access or change the data that an enterprise has trained its model against? Connecting data producers to data consumers more transparently and with greater efficiency can help answer some of these questions.
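
One modest way to start answering the question about tampering is to fingerprint each dataset version with a content hash, record that hash alongside the data’s source, and re-check it before every training run. The sketch below illustrates the idea with an assumed file layout and a made-up provenance log format; it is not a description of any specific HPE or Gaia-X mechanism.

```python
import hashlib
import json
from pathlib import Path

def dataset_fingerprint(data_dir: str) -> str:
    """Hash every file under data_dir in a stable order to fingerprint one dataset version."""
    digest = hashlib.sha256()
    root = Path(data_dir)
    for file in sorted(root.rglob("*")):
        if file.is_file():
            digest.update(str(file.relative_to(root)).encode())
            digest.update(file.read_bytes())
    return digest.hexdigest()

def record_provenance(data_dir: str, source: str, log_path: str = "provenance.json") -> None:
    """Append a provenance entry (hypothetical format) that later training jobs can verify."""
    entry = {"dataset": data_dir, "source": source, "sha256": dataset_fingerprint(data_dir)}
    log_file = Path(log_path)
    log = json.loads(log_file.read_text()) if log_file.exists() else []
    log.append(entry)
    log_file.write_text(json.dumps(log, indent=2))

def verify_before_training(data_dir: str, expected_sha256: str) -> bool:
    """Return False if the data no longer matches what the producer originally recorded."""
    return dataset_fingerprint(data_dir) == expected_sha256
```

A mismatch between the recorded and recomputed hashes does not reveal what changed, but it does flag that the data a model is about to learn from is no longer what the producer originally published.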

Building data maturity

Enterprises aren’t going to solve how to unlock all of their data overnight. But they can prepare themselves to take advantage of technologies and management concepts that help to create a data-sharing mentality. They can ensure that they’re developing the maturity to consume or share data strategically and effectively rather than doing it on an ad hoc basis.

Data producers can prepare for wider distribution of data by taking a series of steps. They need to understand where their data is and understand how they’re collecting it. Then, they need to make sure the people who consume the data have the ability to access the right sets of data at the right times. That’s the starting point.
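
As a minimal sketch of what giving consumers “the right sets of data at the right times” might mean in practice, the example below checks a consumer against a hypothetical access policy that includes a sharing window; the policy structure, dataset names, and consumer names are illustrative assumptions only.

```python
from datetime import datetime, timezone
from typing import Optional

# Hypothetical access policy: who may read a dataset, and during which sharing window.
ACCESS_POLICY = {
    "plant-7/vibration/2021-10": {
        "consumers": {"predictive-maintenance-team", "quality-engineering"},
        "not_before": datetime(2021, 10, 1, tzinfo=timezone.utc),
        "not_after": datetime(2022, 10, 1, tzinfo=timezone.utc),
    }
}

def can_access(dataset_id: str, consumer: str, when: Optional[datetime] = None) -> bool:
    """Check that a consumer is allowed to read a dataset inside its sharing window."""
    policy = ACCESS_POLICY.get(dataset_id)
    if policy is None:
        return False  # datasets without an explicit policy stay closed by default
    when = when or datetime.now(timezone.utc)
    return (consumer in policy["consumers"]
            and policy["not_before"] <= when <= policy["not_after"])

# A known consumer asking inside the window is allowed; anyone else is not.
print(can_access("plant-7/vibration/2021-10", "predictive-maintenance-team",
                 datetime(2021, 11, 15, tzinfo=timezone.utc)))  # True
print(can_access("plant-7/vibration/2021-10", "external-vendor",
                 datetime(2021, 11, 15, tzinfo=timezone.utc)))  # False
```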

Then comes the harder part. If a data producer has consumers—which can be inside or outside the organization—they have to connect to the data. That’s both an organizational and a technology challenge. Many organizations want governance over data sharing with other organizations. The democratization of data—at least being able to find it across organizations—is an organizational maturity issue. How do they handle that?

Companies that contribute to the auto industry actively share data with vendors, partners, and subcontractors. It takes a lot of parts—and a lot of coordination—to assemble a car. Partners readily share information on everything from engines to tires to web-enabled repair channels. Automotive dataspaces can serve upwards of 10,000 vendors. But in other industries, it might be more insular. Some large companies might not want to share sensitive information even within their own network of business units.

Creating a data mentality

Companies on either side of the consumer-producer continuum can advance their data-sharing mentality by asking themselves these strategic questions:

  • If enterprises are building AI and machine learning solutions, where are the teams getting their data? How are they connecting to that data? And how do they track that history to ensure trustworthiness and provenance of data?
  • If data has value to others, what is the monetization path the team is taking today to expand on that value, and how will it be governed?
  • If a company is already exchanging or monetizing data, can it authorize a broader set of services on multiple platforms—on premises and in the cloud?
  • For organizations that need to share data with vendors, how are those vendors being coordinated around the same datasets and updates today?
  • Do producers want to replicate their data or force people to bring models to them? Datasets might be so large that they can’t be replicated. Should a company host software developers on its platform where its data is and move the models in and out?
  • How can workers in a department that consumes data influence the practices of the upstream data producers within their organization?

Taking action

The data revolution is creating business opportunities—along with plenty of confusion about how to search for, collect, manage, and gain insights from that data in a strategic way. Data producers and data consumers are becoming more disconnected from each other. HPE is building a platform that supports both on-premises and public cloud environments, using open source as the foundation and solutions like the HPE Ezmeral Software Platform to provide the common ground both sides need to make the data revolution work for them.

Read the original article on Enterprise.nxt.

This content was produced by Hewlett Packard Enterprise. It was not written by MIT Technology Review’s editorial staff.
