All together now: the most trustworthy covid-19 model is an ensemble

Every week, each team submits not only a point forecast predicting a single numerical outcome (say, that in one week there will be 500 deaths), but also probabilistic predictions that quantify the uncertainty by estimating the likelihood of the number of cases or deaths within intervals, or ranges, that get narrower and narrower, homing in on a central forecast. For instance, a model might predict that there's a 90 percent probability of seeing 100 to 500 deaths, a 50 percent probability of seeing 300 to 400, and a 10 percent probability of seeing 350 to 360.
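The nested-intervals idea can be sketched in a few lines of code. The format and numbers below are purely illustrative and are not the Forecast Hub's actual submission schema:

```python
# Illustrative sketch: a probabilistic death forecast represented as
# nested central prediction intervals (coverage level -> (low, high)).
forecast = {
    0.90: (100, 500),  # 90% chance the death count falls in this wide range
    0.50: (300, 400),  # 50% central interval
    0.10: (350, 360),  # narrow interval around the central forecast
}

def interval_contains(forecast, level, observed):
    """Check whether an observed count landed inside a given interval."""
    lo, hi = forecast[level]
    return lo <= observed <= hi

# A hypothetical observed outcome of 342 deaths lands inside the 90% and
# 50% intervals but misses the narrow 10% interval.
print(interval_contains(forecast, 0.90, 342))  # True
print(interval_contains(forecast, 0.50, 342))  # True
print(interval_contains(forecast, 0.10, 342))  # False
```

This is the "bull's eye" Reich describes: each inner interval makes a sharper claim, at a greater risk of missing the truth.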

“It’s like a bull’s eye, getting more and more focused,” says Reich.

Funk adds: “The sharper you define the target, the less likely you are to hit it.” It’s a fine balance, since an arbitrarily wide forecast will be correct, and also useless. “It should be as precise as possible,” says Funk, “while also giving the correct answer.”

In collating and evaluating all the individual models, the ensemble tries to optimize their information and mitigate their shortcomings. The result is a probabilistic prediction, statistical average, or a “median forecast.” It’s a consensus, essentially, with a more finely calibrated, and hence more realistic, expression of the uncertainty. All the various elements of uncertainty average out in the wash.
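The simplest version of this consensus is easy to sketch: take the median of the individual point forecasts, which resists being dragged around by any single outlying model. The model names and numbers below are made up for illustration:

```python
from statistics import median

# Hypothetical point forecasts (deaths next week) from four models.
point_forecasts = {
    "model_A": 480,
    "model_B": 510,
    "model_C": 650,  # an outlier; the median is robust to it
    "model_D": 495,
}

# The median ensemble: a consensus forecast that no single model dominates.
ensemble_forecast = median(point_forecasts.values())
print(ensemble_forecast)  # 502.5
```

The real hub applies the same idea quantile by quantile across each model's full probabilistic forecast, not just to a single number.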

The study by Reich’s lab, which focused on projected deaths and evaluated about 200,000 forecasts from mid-May to late-December 2020 (an updated analysis with predictions for four more months will soon be added), found that the performance of individual models was highly variable. One week a model might be accurate, the next week it might be way off. But, as the authors wrote, “In combining the forecasts from all teams, the ensemble showed the best overall probabilistic accuracy.”

And these ensemble exercises serve not only to improve predictions, but also people’s trust in the models, says Ashleigh Tuite, an epidemiologist at the Dalla Lana School of Public Health at the University of Toronto. “One of the lessons of ensemble modeling is that none of the models is perfect,” Tuite says. “And even the ensemble sometimes will miss something important. Models in general have a hard time forecasting inflection points—peaks, or if things suddenly start accelerating or decelerating.”

“Models are not oracles.”

Alessandro Vespignani

The use of ensemble modeling is not unique to the pandemic. In fact, we consume probabilistic ensemble forecasts every day when Googling the weather and taking note that there’s a 90 percent chance of precipitation. It’s the gold standard for both weather and climate predictions.

“It’s been a real success story and the way to go for about three decades,” says Tilmann Gneiting, a computational statistician at the Heidelberg Institute for Theoretical Studies and the Karlsruhe Institute of Technology in Germany. Prior to ensembles, weather forecasting used a single numerical model, which produced, in raw form, a deterministic weather forecast that was “ridiculously overconfident and wildly unreliable,” says Gneiting (weather forecasters, aware of this problem, subjected the raw results to subsequent statistical analysis that produced reasonably reliable probability of precipitation forecasts by the 1960s).

Gneiting notes, however, that the analogy between infectious disease and weather forecasting has its limitations. For one thing, the probability of precipitation doesn’t change in response to human behavior—it’ll rain, umbrella or no umbrella—whereas the trajectory of the pandemic responds to our preventative measures.

Forecasting during a pandemic is a system subject to a feedback loop. “Models are not oracles,” says Alessandro Vespignani, a computational epidemiologist at Northeastern University and ensemble hub contributor, who studies complex networks and infectious disease spread with a focus on the “techno-social” systems that drive feedback mechanisms. “Any model is providing an answer that is conditional on certain assumptions.”

When people process a model’s prediction, their subsequent behavioral changes upend the assumptions, change the disease dynamics and render the forecast inaccurate. In this way, modeling can be a “self-destroying prophecy.”

And there are other factors that could compound the uncertainty: seasonality, variants, vaccine availability or uptake, and policy changes like the swift decision from the CDC about unmasking. “These all amount to huge unknowns that, if you actually wanted to capture the uncertainty of the future, would really limit what you could say,” says Justin Lessler, an epidemiologist at the Johns Hopkins Bloomberg School of Public Health, and a contributor to the COVID-19 Forecast Hub.

The ensemble study of death forecasts observed that accuracy decays, and uncertainty grows, as models make predictions farther into the future—there was about two times the error looking four weeks ahead versus one week (four weeks is considered the limit for meaningful short-term forecasts; at the 20-week time horizon there was about five times the error).
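The horizon comparison works by grouping each forecast's error by how far ahead it looked. A toy version of that calculation, with invented numbers (the actual study used proper probabilistic scores, not just absolute error):

```python
from collections import defaultdict

# Hypothetical (weeks_ahead, absolute_error) records for one model.
records = [
    (1, 40), (1, 55), (1, 35),
    (4, 90), (4, 110), (4, 85),
]

# Group errors by forecast horizon, then average within each group.
errors_by_horizon = defaultdict(list)
for horizon, err in records:
    errors_by_horizon[horizon].append(err)

mae_by_horizon = {h: sum(v) / len(v) for h, v in errors_by_horizon.items()}

# Mean absolute error at 1 week vs. 4 weeks ahead, and their ratio
# (roughly 2x in this toy data, echoing the study's finding).
print(mae_by_horizon)
print(mae_by_horizon[4] / mae_by_horizon[1])
```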

“It’s fair to debate when things worked and when things didn’t.”

Johannes Bracher

But assessing the quality of the models—warts and all—is an important secondary goal of forecasting hubs. And it’s easy enough to do, since short-term predictions are quickly confronted with the reality of the numbers tallied day-to-day, as a measure of their success.

Most researchers are careful to differentiate between this type of “forecast model,” which aims to make explicit and verifiable predictions about the future and is only possible in the short term, and a “scenario model,” which explores “what if” hypotheticals, possible plotlines that might develop in the medium- or long-term future (since scenario models are not meant to be predictions, they shouldn’t be evaluated retrospectively against reality).

During the pandemic, a critical spotlight has often been directed at models with predictions that were spectacularly wrong. “While longer-term what-if projections are difficult to evaluate, we shouldn’t shy away from comparing short-term predictions with reality,” says Johannes Bracher, a biostatistician at the Heidelberg Institute for Theoretical Studies and the Karlsruhe Institute of Technology, who coordinates a German and Polish hub, and advises the European hub. “It’s fair to debate when things worked and when things didn’t,” he says. But an informed debate requires recognizing and considering the limits and intentions of models (sometimes the fiercest critics were those who mistook scenario models for forecast models).

“The big question is, can we improve?”

Nicholas Reich

Similarly, when predictions in any given situation prove particularly intractable, modelers should say so. “If we have learned one thing, it’s that cases are extremely difficult to model even in the short run,” says Bracher. “Deaths are a more lagged indicator and are easier to predict.”

In April, some of the European models were overly pessimistic and missed a sudden decrease in cases. A public debate ensued about the accuracy and reliability of pandemic models. Weighing in on Twitter, Bracher asked: “Is it surprising that the models are (not infrequently) wrong? After a 1-year pandemic, I would say: no.”  This makes it all the more important, he says, that models indicate their level of certainty or uncertainty, that they take a realistic stance about how unpredictable cases are, and about the future course. “Modelers need to communicate the uncertainty, but it shouldn’t be seen as a failure,” Bracher says.

Trusting some models more than others

As an oft-quoted statistical aphorism goes, “All models are wrong, but some are useful.” But as Bracher notes, “If you do the ensemble model approach, in a sense you are saying that all models are useful, that each model has something to contribute”—though some models may be more informative or reliable than others.

Observing this fluctuation prompted Reich and others to try “training” the ensemble model—that is, as Reich explains, “building algorithms that teach the ensemble to ‘trust’ some models more than others and learn which precise combination of models works in harmony together.” Bracher’s team now contributes a mini-ensemble, built from only the models that have performed consistently well in the past, amplifying the clearest signal.
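One simple way to sketch such “training” is inverse-error weighting: give each model a weight proportional to the inverse of its recent error, so historically accurate models count for more. This is an illustrative approach with hypothetical names and numbers, not necessarily the algorithm Reich's or Bracher's teams use:

```python
# Recent mean absolute error for each model (lower = more trustworthy).
recent_errors = {"model_A": 20.0, "model_B": 50.0, "model_C": 100.0}

# Each model's new point forecast for the coming week.
new_forecasts = {"model_A": 480.0, "model_B": 510.0, "model_C": 650.0}

# Weight each model by inverse error, then normalize so weights sum to 1.
weights = {m: 1.0 / e for m, e in recent_errors.items()}
total = sum(weights.values())
weights = {m: w / total for m, w in weights.items()}

# The trained ensemble: a weighted average that leans toward model_A,
# the model with the best recent track record.
weighted_forecast = sum(weights[m] * new_forecasts[m] for m in new_forecasts)
print(weighted_forecast)
```

With these numbers the best-performing model gets a weight of 0.625, so the trained ensemble lands much closer to its forecast than a simple average would.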

“The big question is, can we improve?” Reich says. “The original method is so simple. It seems like there has to be a way of improving on just taking a simple average of all these models.” So far, however, it is proving harder than expected—small improvements seem feasible, but dramatic improvements may be close to impossible.

A complementary tool for improving our overall perspective on the pandemic beyond week-to-week glimpses is to look further out on the time horizon, four to six months, with scenario models. Last December, motivated by the surge in cases and the imminent availability of the vaccine, Lessler and collaborators launched the COVID-19 Scenario Modeling Hub, in consultation with the CDC.

The best vaccine incentive might be paid time off


But state laws are a piecemeal approach, and workers’ protections or benefits largely depend on what employers will give. Ifeoma Ajunwa, an associate professor of law at the University of North Carolina at Chapel Hill, says employers operate as their own private governments, with free rein over how they run their business. Covid exposed “the limited power that the government can exert over employers,” says Ajunwa. “The pandemic really laid that bare, especially when it came to covid-19 precautions or covid-19 procedures for operation.”

That means it’s largely up to workers to research and understand their rights.

“If you’re part of the 94% of private sector workers who are not in a union, you may not know that a benefit exists,” says Justin Feldman, an epidemiologist at Harvard who has written about covid-19 and the workplace. “And even if you do know that exists, it doesn’t mean you’re going to be able to exercise it without retaliation.”

In a statement, the New York Department of Labor told me it has received “various complaints” about violation of the covid-19 vaccination leave law and says that it “attempts to collect unpaid wages, or restitution for those who were not paid for the time off as required.” 

But even laws that appear, on paper, to support workers could neglect those in the most precarious jobs. The New York Department of Labor has said any worker denied vaccination leave should file a complaint but declined to say specifically if so-called gig workers are covered. (Ajunwa at Chapel Hill says that because the law uses the word “employee,” it would not cover gig workers, who also don’t get health insurance through work.) 

“A national emergency”

Public health experts stress that there isn’t just one foolproof tactic for getting people vaccinated. The government could create a series of paid days off for workers in different sectors to get shots, but we’d still need to combine that with other public health strategies like going door to door, Feldman says. 

Misconceptions about covid-19 need tackling, too: younger workers may believe they’re not susceptible to severe effects of the disease, Feldman notes, especially if they’ve already worked in person with minimum precautions throughout the pandemic and haven’t gotten sick. It may be particularly hard to change their minds after hearing peers, media, or commentators downplaying the risk.

“We need to treat getting people vaccinated as a national emergency, and that means not treating it like an individual failing,” he says. “We need to do a lot of different things at the same time and see what works.”

“Once folks have the information they need, based on the science, it makes other carrots more like the icing on the cake.”

Rhea Boyd, founder of The Conversation

Rhea Boyd, a pediatrician in the San Francisco Bay Area, says that people need more information before they can be persuaded by incentives. She founded The Conversation, in which Black and Latino health-care workers deliver credible information about covid-19 vaccines to their communities. 

“A major incentive is personal self-interest,” Boyd said in an email. “Once folks have the information they need, based on the science, it makes other ‘carrots’ more like the icing on the cake.”

What would that look like?

“We will only know what is enough once everyone is vaccinated,” she says.

In the meantime, frontline workers’ level of protection on the job continues to rely on shifting public health recommendations, their employers’ own policies, and the whims of customers who can choose to abide by safety measures—or not.

And although public health officials have taken vaccine clinics to public parks, churches, and Juneteenth celebrations in an attempt to change minds, workers are watching what their bosses say and do.

“Workers of every stripe take cues for what they should be doing from their employers,” Ajunwa says. “I think this points to an oversize influence that employers have on employees’ lives in America.”

This story is part of the Pandemic Technology Project, supported by The Rockefeller Foundation.

Astronomers have spotted x-rays from behind a supermassive black hole



“This is a really exciting result,” says Edward Cackett, an astronomer at Wayne State University who was not involved with the study. “Although we have seen the signature of x-ray echoes before, until now it has not been possible to separate out the echo that comes from behind the black hole and gets bent around into our line of sight. It will allow for better mapping of how things fall into black holes and how black holes bend the space time around them.”

The release of energy by black holes, sometimes in the form of x-rays, is an absurdly extreme process. And because supermassive black holes release so much energy, they are essentially powerhouses that allow galaxies to grow around them. “If you want to understand how galaxies form, you really need to understand these processes outside the black hole that are able to release these enormous amounts of energy and power, these amazingly bright light sources that we’re studying,” says Dan Wilkins, an astrophysicist at Stanford University and the lead author of the study. 

The study focuses on a supermassive black hole at the center of a galaxy called I Zwicky 1 (I Zw 1 for short), around 100 million light-years from Earth. In supermassive black holes like I Zw 1’s, large amounts of gas fall toward the center (the event horizon, which is basically the point of no return) and tend to flatten out into a disk. Above the black hole, a confluence of supercharged particles and magnetic field activity results in the production of high-energy x-rays.

Some of these x-rays are shining straight at us, and we can observe them normally, using telescopes. But some of them also shine down toward the flat disk of gas and will reflect off it. The I Zw 1 black hole rotates at a slower rate than most supermassive black holes, which causes surrounding gas and dust to fall in more easily and feed the black hole from multiple directions. This, in turn, leads to greater x-ray emissions, which is why Wilkins and his team were especially interested.

While Wilkins and his team were observing this black hole, they noticed that the corona appeared to be “flashing.” These flashes, caused by x-ray pulses reflecting off the massive disk of gas, were coming from behind the black hole’s shadow—a place that is normally hidden from view. But because the black hole bends the space around it, the x-ray reflections are also bent around it, which means we can spot them.

The signals were found using two different space-based telescopes optimized to detect x-rays in space: NuSTAR, which is run by NASA, and XMM-Newton, which is run by the European Space Agency.

The biggest implication of the new findings is that they confirm what Albert Einstein predicted in 1915 as part of his theory of general relativity: the way light ought to bend around gargantuan objects like supermassive black holes.

“It’s the first time we really see the direct signature of the way light bends all the way behind the black hole into our line of sight, because of the way the black hole warps space around itself,” says Wilkins.

“While this observation doesn’t change our general picture of black hole accretion, it is a nice confirmation that general relativity is at play in these systems,” says Erin Kara, an astrophysicist at MIT who was not involved with the study.

Despite the name, supermassive black holes are so far away that they really just look like single points of light, even with state-of-the-art instruments. It’s not going to be possible to take images of all of them the way scientists used the Event Horizon Telescope to capture the shadow of a supermassive black hole in galaxy M87. 

So although it’s early, Wilkins and his team are hopeful that detecting and studying more of these x-ray echoes from behind the bend could help us create partial or even full pictures of distant supermassive black holes. In turn, that could help them unlock some big mysteries around how supermassive black holes grow, sustain entire galaxies, and create environments where the laws of physics are pushed to the limit.  

The pandemic slashed the West Coast’s emissions. Wildfires already reversed it.

Kings Canyon National Park after a forest fire


That’s far above normal levels for this part of the year and comes on top of the surge of emissions from the massive fires across the American West in 2020. California fires alone produced more than 100 million tons of carbon dioxide last year, which was already enough to more than cancel out the broader region’s annual emissions declines.

“The steady but slow reductions in [greenhouse gases] pale in comparison to those from wildfire,” says Oriana Chegwidden, a climate scientist at CarbonPlan.

Massive wildfires burning across millions of acres in Siberia are also clogging the skies across eastern Russia and releasing tens of millions of tons of emissions, Copernicus reported earlier this month.

Fires and forest emissions are only expected to increase across many regions of the world as climate change accelerates in the coming decades, creating the hot and often dry conditions that turn trees and plants into tinder.

Fire risk—defined as the chance that an area will experience a moderate- to high-severity fire in any given year—could quadruple across the US by 2090, even under scenarios where emissions decline significantly in the coming decades, according to a recent study by researchers at the University of Utah and CarbonPlan. With unchecked emissions, US fire risk could be 14 times higher near the end of the century.

Emissions from fires are “already bad and only going to get worse,” says Chegwidden, one of the study’s lead authors.

“Very ominous”

Over longer periods, the emissions and climate impacts of increasing wildfires will depend on how rapidly forests grow back and draw carbon back down—or whether they do at all. That, in turn, depends on the dominant trees, the severity of the fires, and how much local climate conditions have changed since that forest took root.

While working toward her doctorate in the early 2010s, Camille Stevens-Rumann spent summer and spring months trekking through alpine forests in Idaho’s Frank Church–River of No Return Wilderness, studying the aftermath of fires.

She noted where and when conifer forests began to return, where they didn’t, and where opportunistic invasive species like cheatgrass took over the landscape.

In a 2018 study in Ecology Letters, she and her coauthors concluded that trees that burned down across the Rocky Mountains have had far more trouble growing back this century, as the region has grown hotter and drier, than during the end of the last one. Dry conifer forests that had already teetered on the edge of survivable conditions were far more likely to simply convert to grass and shrublands, which generally absorb and store much less carbon.

This can be healthy up to a point, creating fire breaks that reduce the damage of future fires, says Stevens-Rumann, an assistant professor of forest and rangeland stewardship at Colorado State University. It can also help to make up a bit for the US’s history of aggressively putting out fires, which has allowed fuel to build up in many forests, also increasing the odds of major blazes when they do ignite.

But their findings are “very ominous” given the massive fires we’re already seeing and the projections for increasingly hot, dry conditions across the American West, she says.

Other studies have noted that these pressures could begin to fundamentally transform western US forests in the coming decades, damaging or destroying sources of biodiversity, water, wildlife habitat, and carbon storage.

Fires, droughts, insect infestations, and shifting climate conditions will convert major parts of California’s forests into shrublands, according to a modeling study published in AGU Advances last week. Tree losses could be particularly steep in the dense Douglas fir and coastal redwood forests along the Northern California coast and in the foothills of the Sierra Nevada range.

Kings Canyon National Park, in California’s Sierra Nevada range, following a recent forest fire.

GETTY

All told, the state will lose around 9% of the carbon stored in trees and plants aboveground by the end of this century under a scenario in which we stabilize emissions this century, and more than 16% in a future world where they continue to rise.

Among other impacts, that will clearly complicate the state’s reliance on its lands to capture and store carbon through its forestry offsets program and other climate efforts, the study notes. California is striving to become carbon neutral by 2045. 

Meanwhile, medium- to high-emissions scenarios create “a real likelihood of Yellowstone’s forests being converted to non-forest vegetation during the mid-21st century,” because increasingly common and large fires would make it more and more difficult for trees to grow back, a 2011 study in Proceedings of the National Academy of Sciences concluded.

The global picture

The net effect of climate change on fires, and fires on climate change, is much more complicated globally.

Fires contribute directly to climate change by releasing emissions from trees as well as the rich carbon stored in soils and peatlands. They can also produce black carbon that may eventually settle on glaciers and ice sheets, where it absorbs heat. That accelerates the loss of ice and the rise of ocean levels.

But fires can drive negative climate feedback as well. The smoke from Western wildfires that reached the East Coast in recent days, while terrible for human health, carries aerosols that reflect some sunlight back into space. Similarly, fires in boreal forests in Canada, Alaska, and Russia can open up space for snow that’s far more reflective than the forests it replaced, offsetting the heating effect of the emissions released.

Different parts of the globe are also pushing and pulling in different ways.

Climate change is making wildfires worse in most forested areas of the globe, says James Randerson, a professor of earth system science at the University of California, Irvine, and a coauthor of the AGU paper.

But the total area burned by fires worldwide is actually going down, primarily thanks to decreases across the savannas and grasslands of the tropics. Among other factors, sprawling farms and roads are fragmenting the landscape in developing parts of Africa, Asia, and South America, acting as breaks for these fires. Meanwhile, growing herds of livestock are gobbling up fuels.

Overall, global emissions from fires stand at about a fifth the level of those from fossil fuels, though they’re not yet rising sharply. But total emissions from forests have clearly been climbing when you include fires, deforestation, and logging: they’ve grown from less than 5 billion tons in 2001 to more than 10 billion in 2019, according to a Nature Climate Change paper published in January.

Less fuel to burn

As warming continues in the decades ahead, climate change itself will affect different areas in different ways. While many regions will become hotter, drier, and more susceptible to wildfires, some cooler parts of the globe will become more hospitable to forest growth, like the high reaches of tall mountains and parts of the Arctic tundra, Randerson says.

Global warming could also reach a point where it actually starts to reduce certain risks as well. If Yellowstone, California’s Sierra Nevada, and other areas lose big portions of their forests, as studies have suggested, fires in those areas could begin to tick back down toward the end of the century. That’s because there’ll simply be less, or less flammable, fuel to burn.

Worldwide fire levels in the future will ultimately depend both on the rate of climate change as well as human activity, which is the main source of ignitions, says Doug Morton, chief of the biospheric sciences laboratory at NASA’s Goddard Space Flight Center.
