No one can find the animal that gave people covid-19



One problem with the lab leak theory is that it presumes the Chinese are lying or hiding facts, a position incompatible with a joint scientific effort. This may have been why the WHO team, for instance, never asked to see the offline database. Peter Daszak, president of the EcoHealth Alliance, which collaborated with the Wuhan lab for many years and funded some of its work, says there is “no evidence” whatsoever to back the lab theory. “If you just firmly believe [that] what we hear from our Chinese colleagues over there in the labs is not going to be true, we will never be able to rule it out,” he said of the lab theory. “That is the problem. In its essence, that theory is not a conspiracy theory. But people have put it forward as such, saying the Chinese side conspired to cover up evidence.”

Those who believe a lab accident is likely, including Jamie Metzl, a technology and national security fellow at the Atlantic Council, argue that the WHO team isn’t set up to carry out the sort of forensic probe they believe is necessary. “Everyone on earth is a stakeholder in this,” Metzl says. “It’s crazy that a year into this, there is no full investigation into the origins of the pandemic.” In February, Metzl published a statement in which he said he was “appalled” by the investigators’ quick rebuttal of the lab hypothesis and called for Daszak to be removed from the team. Several days later, the WHO director general, Tedros Adhanom Ghebreyesus, appeared to rebuke the origins team in a speech in which he said, “I want to clarify that all hypotheses remain open and require further study.”

The scenario the WHO-China team said it considers most probable is the “intermediary” theory, in which a bat virus infected another wild animal that was then caught or farmed for food. The intermediary theory does have the strongest precedents. Not only is there the case of SARS, but in 2012 researchers discovered Middle East respiratory syndrome (MERS), a deadly lung infection caused by another coronavirus, and quickly traced it to dromedary camels.

The trouble with this hypothesis is that Chinese researchers have not succeeded in finding a “direct progenitor” of this virus in any animal they’ve looked at. Liang said China had tested 50,000 animal specimens, including 1,100 bats in Hubei province, where Wuhan is located. But no luck: a matching virus still hasn’t been found.

The Chinese team appears to strongly favor a twist on the intermediate-animal idea: that the virus could have reached Wuhan on a frozen food shipment that included a frozen wild animal. This “cold chain” hypothesis may have appeal because it would mean the virus came from thousands of miles away, even outside China. “We think that is a valid option,” says Marion Koopmans, a Dutch virologist who traveled with the group. She said China had tested 1.5 million frozen samples and found the virus 30 times. “That may not be surprising in the middle of an outbreak, when many people are handling these products,” Koopmans says. “But the WHO did request studies in which the virus was spiked onto fish, frozen and thawed, and could still be cultured. So it’s possible. You cannot rule it out.”

Blame game

The WHO-China team, in its eventual report, is expected to suggest further research that needs to be carried out. This is one reason the report matters; it may determine which questions get asked and which don’t.

There is likely to be a larger effort to trace the wild-animal trade, including supply chains of frozen products. In addition to animal evidence, Ben Embarek also said China should make a greater effort to locate people who were infected with covid-19 early on but perhaps were asymptomatic or didn’t get tested. That could be done by hunting through samples in blood banks, using newer, more sensitive technology to locate antibodies. “We need to keep looking for material that could give insight into the early days of the events,” Ben Embarek said. The report is also likely to call for the creation of a master database that includes all the data collected so far.

WHO officer Peter Ben Embarek (right) and Liang Wannian shake hands after a press conference in Wuhan, China, on Feb. 9, 2021, in which they ranked four theories for how the covid-19 pandemic began.


Ultimately, in seeking the cause of the covid-19 disaster, we don’t just want to know what happened. We’re also looking for something—or someone—to blame. And each hypothesis points to a different culprit. To ecologists, the lesson of the pandemic is nearly a foregone conclusion: humans should stop encroaching on wild areas. “We have come to recognize how this kind of investigation is not just about illness in humans—nor indeed just about an interface between humans and animals—but feeds into an altogether wider discussion about how we use the world,” says John Watson, the British epidemiologist.

The Chinese authorities, meanwhile, are already taking action on the intermediary theory by putting responsibility on wild-animal farmers and traders. Last February, according to NPR, China’s legislature started taking steps to “uproot the pernicious habit of eating wild animals.” At the behest of President Xi Jinping, they have already banned the hunting, trade, and consumption of a large number of “terrestrial wild animals,” a step never fully implemented after the original SARS outbreak. According to a report in Nature, the Chinese government has already closed 12,000 businesses, purged a million websites with information about wildlife trading, and banned the farming of bamboo rats and civets, among other species.

Then there is the chance covid-19 is the result of a laboratory accident. If that’s true, it would bring the sharpest consequences, especially for scientists like those in charge of finding the virus’s origin. If the pandemic was caused by ambitious, high-tech research on dangerous germs, it would mean China’s fast rise as a biotech powerhouse is a threat to the globe. It would mean this type of science should be severely restricted, or even banned, in China and everywhere else. More than any other hypothesis, a government-sponsored technology program run amok—along with early efforts to conceal news of the outbreak—would establish a case for retribution. “If this is a man-made catastrophe,” says Miles Yu, an analyst with the conservative Hudson Institute, “I think the world should seek reparations.”

According to some former virus chasers, what’s actually in the WHO-China origins report may be different from what we’ve heard so far. Schnur says the Chinese probably already know much more than we think, so the role of the team could be to find ways to push those facts into the light. It is a process he calls “part diplomacy and part epidemiology.” He believes China’s investigation was likely very thorough and that the foreign visitors may also have stronger views than they have let on so far.

As he points out, “What you say in a press conference may be different than what you put in a report once you have left the country.”


Decarbonizing industries with connectivity and 5G




The United Nations Intergovernmental Panel on Climate Change’s sixth assessment report—an aggregated assessment of scientific research prepared by some 300 scientists across 66 countries—has served as the loudest and clearest wake-up call to date on the global warming crisis. The panel unequivocally attributes the increase in the earth’s temperature—it has risen by 1.1 °C since the Industrial Revolution—to human activity. Without substantial and immediate reductions in carbon dioxide and other greenhouse gas emissions, temperatures will rise between 1.5 °C and 2 °C before the end of the century. That, the panel posits, will lead all of humanity to a “greater risk of passing through ‘tipping points,’ thresholds beyond which certain impacts can no longer be avoided even if temperatures are brought back down later on.”

Corporations and industries must therefore redouble their greenhouse gas emissions reduction and removal efforts with speed and precision—but to do this, they must also commit to deep operational and organizational transformation. Cellular infrastructure, particularly 5G, is one of the many digital tools and technology-enabled processes organizations have at their disposal to accelerate decarbonization efforts.  

5G and other cellular technologies can enable increasingly interconnected supply chains and networks, improve data sharing, optimize systems, and increase operational efficiency. These capabilities could significantly accelerate global efforts to reduce carbon emissions.

Industries such as energy, manufacturing, and transportation could have the biggest impact on decarbonization efforts through the use of 5G, as they are some of the biggest greenhouse-gas-emitting industries, and all rely on connectivity to link to one another through communications network infrastructure.

The higher performance and improved efficiency of 5G—which delivers higher multi-gigabit peak data speeds, ultra-low latency, increased reliability, and increased network capacity—could help businesses and public infrastructure providers focus on business transformation and reduction of harmful emissions. This requires effective digital management and monitoring of distributed operations with resilience and analytic insight. 5G will help factories, logistics networks, power companies, and others operate more efficiently, more consciously, and more purposefully in line with their explicit sustainability objectives through better insight and more powerful network configurations.

This report, “Decarbonizing industries with connectivity & 5G,” argues that the capabilities enabled by broadband cellular connectivity, primarily though not exclusively through 5G network infrastructure, are unique, powerful, and immediate enablers of carbon reduction efforts. They have the potential to create a transformational acceleration of decarbonization efforts, as increasingly interconnected supply chains, transportation, and energy networks share data to increase efficiency and productivity, thereby optimizing systems for lower carbon emissions.



Surgeons have successfully tested a pig’s kidney in a human patient




The reception: The research was conducted last month and is yet to be peer reviewed or published in a journal, but external experts say it represents a major advance. “There is no doubt that this is a highly significant breakthrough,” says Darren K. Griffin, a professor of genetics at the University of Kent, UK. “The research team were cautious, using a patient who had suffered brain death, attaching the kidney to the outside of the body, and closely monitoring for only a limited amount of time. There is thus a long way to go and much to discover,” he added. 

“This is a huge breakthrough. It’s a big, big deal,” Dorry Segev, a professor of transplant surgery at Johns Hopkins School of Medicine who was not involved in the research, told the New York Times. However, he added, “we need to know more about the longevity of the organ.”

The background: In recent years, research has increasingly zeroed in on pigs as the most promising avenue to help address the shortage of organs for transplant, but it has faced a number of obstacles, most prominently the fact that a sugar in pig cells triggers an aggressive rejection response in humans.

The researchers got around this by genetically altering the donor pig to knock out the gene encoding the sugar molecule that causes the rejection response. The pig was genetically engineered by Revivicor, one of several biotech companies working to develop pig organs to transplant into humans. 

The big prize: There is a dire need for more kidneys. More than 100,000 people in the US are currently waiting for a kidney transplant, and 13 of them die every day, according to the National Kidney Foundation. Genetically engineered pigs could offer a crucial lifeline for these people, if the approach tested at NYU Langone can work for much longer periods.



Getting value from your data shouldn’t be this hard




The potential impact of the ongoing worldwide data explosion continues to excite the imagination. A 2018 report estimated that every second of every day, every person produces 1.7 MB of data on average—and annual data creation has more than doubled since then and is projected to more than double again by 2025. A report from McKinsey Global Institute estimates that skillful uses of big data could generate an additional $3 trillion in economic activity, enabling applications as diverse as self-driving cars, personalized health care, and traceable food supply chains.

But adding all this data to the system is also creating confusion about how to find it, use it, manage it, and legally, securely, and efficiently share it. Where did a certain dataset come from? Who owns what? Who’s allowed to see certain things? Where does it reside? Can it be shared? Can it be sold? Can people see how it was used?

As data’s applications grow and become more ubiquitous, producers, consumers, and owners and stewards of data are finding that they don’t have a playbook to follow. Consumers want to connect to data they trust so they can make the best possible decisions. Producers need tools to share their data safely with those who need it. But technology platforms fall short, and there are no real common sources of truth to connect both sides.

How do we find data? When should we move it?

In a perfect world, data would flow freely like a utility accessible to all. It could be packaged up and sold like raw materials. It could be viewed easily, without complications, by anyone authorized to see it. Its origins and movements could be tracked, removing any concerns about nefarious uses somewhere along the line.

Today’s world, of course, does not operate this way. The massive data explosion has created a long list of issues and opportunities that make it tricky to share chunks of information.

With data being created nearly everywhere within and outside of an organization, the first challenge is identifying what is being gathered and how to organize it so it can be found.

  • A lack of transparency and sovereignty over stored and processed data and infrastructure opens up trust issues.
  • Moving data to centralized locations from multiple technology stacks is expensive and inefficient.
  • The absence of open metadata standards and widely accessible application programming interfaces can make it hard to access and consume data.
  • The presence of sector-specific data ontologies can make it hard for people outside the sector to benefit from new sources of data.
  • Multiple stakeholders and difficulty accessing existing data services can make it hard to share without a governance model.
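To make the metadata problem concrete, here is a minimal sketch of the kind of catalog record that could answer questions like “where did this come from?” and “who may see it?” The field names and the `validate_record` helper are hypothetical illustrations, not any real metadata standard or vendor API.

```python
# A minimal sketch of a dataset metadata record, using a hypothetical
# in-house schema (not a real standard such as DCAT).

REQUIRED_FIELDS = {"name", "origin", "owner", "location", "allowed_consumers"}

def validate_record(record: dict) -> list:
    """Return the sorted list of required fields missing from a record."""
    return sorted(REQUIRED_FIELDS - record.keys())

sensor_data = {
    "name": "factory-sensor-readings-2021",
    "origin": "plant-3 edge gateway",            # where the data came from
    "owner": "operations-team",                  # who owns it
    "location": "s3://example-bucket/sensors",   # where it resides
    "allowed_consumers": ["analytics-team"],     # who may see it
}

print(validate_record(sensor_data))                   # prints []
print(validate_record({"name": "untagged-export"}))   # lists missing fields
```

Even a schema this small makes several of the questions above answerable mechanically, which is the point of open metadata standards: tools on both sides of a data exchange can agree on what a complete record looks like.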

Europe is taking the lead

Despite the issues, data-sharing projects are being undertaken on a grand scale. One that’s backed by the European Union and a nonprofit group is creating an interoperable data exchange called Gaia-X, where businesses can share data under the protection of strict European data privacy laws. The exchange is envisioned as a vessel to share data across industries and a repository for information about data services around artificial intelligence (AI), analytics, and the internet of things.

Hewlett Packard Enterprise recently announced a solution framework to support companies, service providers, and public organizations’ participation in Gaia-X. The dataspaces platform, which is currently in development and built on open standards and cloud-native principles, democratizes access to data, data analytics, and AI by making them more accessible to domain experts and common users. It provides a place where experts from domain areas can more easily identify trustworthy datasets and securely perform analytics on operational data—without always requiring the costly movement of data to centralized locations.

By using this framework to integrate complex data sources across IT landscapes, enterprises will be able to provide data transparency at scale, so everyone—whether a data scientist or not—knows what data they have, how to access it, and how to use it in real time.

Data-sharing initiatives are also at the top of enterprises’ agendas. One important priority enterprises face is the vetting of data that’s being used to train internal AI and machine learning models. AI and machine learning are already being used widely in enterprises and industry to drive ongoing improvements in everything from product development to recruiting to manufacturing. And we’re just getting started. IDC projects the global AI market will grow from $328 billion in 2021 to $554 billion in 2025.

To unlock AI’s true potential, governments and enterprises need to better understand the collective legacy of all the data that is driving these models. How do AI models make their decisions? Do they have bias? Are they trustworthy? Have untrustworthy individuals been able to access or change the data that an enterprise has trained its model against? Connecting data producers to data consumers more transparently and with greater efficiency can help answer some of these questions.
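One common way to approach the tamper question is a content fingerprint: hash the training data when it is vetted, then recompute the hash before each training run and compare. The sketch below uses Python’s standard `hashlib`; the sample rows and the idea of a stored reference digest are illustrative assumptions, not part of any product mentioned here.

```python
# Sketch: detect training-data tampering by fingerprinting the dataset
# with a SHA-256 digest recorded at vetting time.
import hashlib

def dataset_digest(rows: list) -> str:
    """Hash the serialized rows into a single hex digest."""
    h = hashlib.sha256()
    for row in rows:
        h.update(repr(row).encode("utf-8"))
    return h.hexdigest()

training_rows = [("applicant-1", 0.72), ("applicant-2", 0.41)]
expected = dataset_digest(training_rows)  # recorded when the data was vetted

# Later, before training: recompute and compare.
assert dataset_digest(training_rows) == expected

# A single altered value changes the digest, flagging possible tampering.
tampered_rows = [("applicant-1", 0.72), ("applicant-2", 0.99)]
assert dataset_digest(tampered_rows) != expected
```

A digest only proves the data hasn’t changed since vetting; establishing that the original data was trustworthy, and who touched it along the way, still requires the kind of provenance tracking discussed above.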

Building data maturity

Enterprises aren’t going to solve how to unlock all of their data overnight. But they can prepare themselves to take advantage of technologies and management concepts that help to create a data-sharing mentality. They can ensure that they’re developing the maturity to consume or share data strategically and effectively rather than doing it on an ad hoc basis.

Data producers can prepare for wider distribution of data by taking a series of steps. They need to understand where their data is and understand how they’re collecting it. Then, they need to make sure the people who consume the data have the ability to access the right sets of data at the right times. That’s the starting point.
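As a toy illustration of that starting point, a producer’s catalog could gate each dataset behind an explicit list of permitted consumers. The catalog structure and the dataset and team names below are invented for the example; a real deployment would use a proper policy or governance engine.

```python
# Sketch: a producer-side access check against a hypothetical catalog
# mapping each dataset to the consumers allowed to read it.

CATALOG = {
    "sales-2021": {"allowed": {"finance-team", "ml-team"}},
    "hr-records": {"allowed": {"hr-team"}},
}

def can_access(dataset: str, consumer: str) -> bool:
    """Grant access only if the dataset exists and lists this consumer."""
    entry = CATALOG.get(dataset)
    return entry is not None and consumer in entry["allowed"]

print(can_access("sales-2021", "ml-team"))   # prints True
print(can_access("hr-records", "ml-team"))   # prints False
```

Note that unknown datasets are denied by default; failing closed is the safer choice when the catalog and the data inevitably drift out of sync.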

Then comes the harder part. If a data producer has consumers—which can be inside or outside the organization—it has to connect them to the data. That’s both an organizational and a technology challenge. Many organizations want governance over data sharing with other organizations. The democratization of data—at least being able to find it across organizations—is an organizational maturity issue. How do they handle that?

Companies that contribute to the auto industry actively share data with vendors, partners, and subcontractors. It takes a lot of parts—and a lot of coordination—to assemble a car. Partners readily share information on everything from engines to tires to web-enabled repair channels. Automotive dataspaces can serve upwards of 10,000 vendors. But in other industries, it might be more insular. Some large companies might not want to share sensitive information even within their own network of business units.

Creating a data mentality

Companies on either side of the consumer-producer continuum can advance their data-sharing mentality by asking themselves these strategic questions:

  • If enterprises are building AI and machine learning solutions, where are the teams getting their data? How are they connecting to that data? And how do they track that history to ensure trustworthiness and provenance of data?
  • If data has value to others, what is the monetization path the team is taking today to expand on that value, and how will it be governed?
  • If a company is already exchanging or monetizing data, can it authorize a broader set of services on multiple platforms—on premises and in the cloud?
  • For organizations that need to share data with vendors, how is the coordination of those vendors to the same datasets and updates getting done today?
  • Do producers want to replicate their data or force people to bring models to them? Datasets might be so large that they can’t be replicated. Should a company host software developers on its platform where its data is and move the models in and out?
  • How can workers in a department that consumes data influence the practices of the upstream data producers within their organization?

Taking action

The data revolution is creating business opportunities—along with plenty of confusion about how to search for, collect, manage, and gain insights from that data in a strategic way. Data producers and data consumers are becoming more disconnected from each other. HPE is building a platform supporting both on-premises and public cloud, using open source as the foundation and solutions like HPE Ezmeral Software Platform to provide the common ground both sides need to make the data revolution work for them.

Read the original article on Enterprise.nxt.

This content was produced by Hewlett Packard Enterprise. It was not written by MIT Technology Review’s editorial staff.
