
Why mixing vaccines could help boost immunity


We should soon have a better idea. A handful of trials are now under way to test the power of vaccine combinations, with the first results due later this month. If these mixed regimens prove safe and effective, countries will be able to keep the vaccine rollout moving even if supplies of one vaccine dwindle because of manufacturing delays, unforeseen shortages, or safety concerns.

But there’s another, more exciting prospect that could be a vital part of our strategy in the future: mixing vaccines might lead to broader immunity and hamper the virus’s attempts to evade our immune systems. Eventually, a mix-and-match approach might be the best way to protect ourselves.

Mixing on trial 

The covid-19 vaccines currently in use protect against the virus in slightly different ways. Most target the coronavirus’s spike protein, which it uses to gain entry to our cells. But some deliver the instructions for making the protein in the form of messenger RNA (Pfizer, Moderna). Some deliver the spike protein itself (Novavax). Some use another harmless virus to ferry in the instructions for making it, like a Trojan horse (Johnson & Johnson, Oxford-AstraZeneca, Sputnik V). Some offer up whole inactivated virus (Sinopharm, Sinovac). 

In a study published in March, researchers from the National Institutes for Food and Drug Control in China tested combinations of four different covid-19 vaccines in mice and found that some did improve immune response. When they first gave the rodents a vaccine that relies on a harmless cold virus to smuggle in the instructions and then a second dose of a different type of vaccine, they saw higher antibody levels and a better T-cell response. But when they reversed the order, giving the viral-vector vaccine second, they did not see an improvement.

Why combining shots might improve efficacy is a bit of a mystery, says Shan Lu, a physician and vaccine researcher at the University of Massachusetts Medical School who pioneered this mixing strategy. “The mechanism we can explain partially, but we don’t fully understand.” Different vaccines present the same information in slightly different ways. Those differences might awaken different parts of the immune system or sharpen the immune response. This strategy might also make immunity last longer.  

Whether those results translate to humans remains to be seen. Researchers at Oxford University have launched a human trial to test just how mixing might work. The study, called Com-CoV, offers participants a first shot of Pfizer or Oxford-AstraZeneca. For their second dose, they will either get the same vaccine or a shot of Moderna or Novavax. The first results should be available in the coming weeks. 

Other studies are under way as well. In Spain, where Oxford-AstraZeneca is now being given only to people over 60, researchers plan to recruit 600 people to test whether a first dose of the shot can be paired with a second dose from Pfizer. According to reporting in El País, about a million people received the first dose of the vaccine but aren’t old enough to receive the second dose. Health officials are waiting for the results of this study before issuing recommendations for this group, but it’s not clear whether any participants have yet been recruited. 

Late last year Oxford-AstraZeneca announced that it would partner with Russia’s Gamaleya Institute, which developed the Sputnik V vaccine, to test how the two shots work in combination. The trial was supposed to launch in March and provide interim results in May, but it’s not clear whether it has actually begun. And Chinese officials have hinted that they’ll explore mixing vaccines to boost the efficacy of their shots.

The biggest gains might come from mixing vaccines that have lower efficacies. The mRNA vaccines from Pfizer and Moderna provide excellent protection. “I don’t think there’s reason to mess with that,” says Donna Farber, an immunologist at Columbia University. But mixing might improve protection for some of the vaccines that have reported lower levels of protection, like Oxford-AstraZeneca and Johnson & Johnson, as well as some of the Chinese vaccines. Many of these vaccines work quite well, but mixing might help them work even better. 


Transforming health care at the edge


Edge computing, through on-site sensors and devices, as well as last-mile edge equipment that connects to those devices, allows data processing and analysis to happen close to the digital interaction. Rather than using centralized cloud or on-premises infrastructure, these distributed tools at the edge offer the same quality of data processing but without latency issues or massive bandwidth use.

“The real-time feedback loop required for things like remote monitoring of a patient’s heart and respiratory metrics is only possible with something like edge computing,” Mirchandani says. “If all that information took several seconds or a minute to get processed somewhere else, it’s useless.”

Opportunities and challenges at the health-care edge

The sky’s the limit when it comes to the opportunities to use edge computing in health care, says Paul Savill, senior vice president of product management and services at technology company Lumen, especially as health systems work to reduce costs by shifting testing and treatment out of hospitals and into clinics, retail locations, and homes.

“A lot of patient care now happens at retail drugstores, whether it is blood work, scans, or other assessments,” Savill says. “With edge computing capabilities and tools, that can now take place on-site, on a real-time basis, so you don’t have to send things to a lab and wait a day or week to get results back.”

The arrival of 5G technology, the new standard for broadband cellular networks, will also drive opportunities, as it works with edge computing tools to support the internet of things and machine learning, adds Mirchandani. “It’s the combination of this super-low-latency network and computing at the edge that will help these powerful new applications take flight,” he says. Take robotic surgeries—it’s crucial for the surgeon to have nearly instant, sub-millisecond sensory feedback. “That’s not possible in any other way than through technologies such as edge computing and 5G,” he says.


Data security, however, is a particular challenge for any health-care-related technology because of HIPAA, the US health information privacy law, and other regulations. The real-time data transmission edge computing provides will be under significant scrutiny, Mirchandani explains, which may affect widespread adoption. “There needs to be an almost 100% guarantee that the information you generate from a heart monitor, pulse oximeter, blood glucose monitor, or any other device will not be intercepted or disrupted in any way,” he says.

Still, edge computing technologies, paired with the right security standards and tools, are often more secure and reliable than the on-premises environment a business could implement on its own, Savill points out. “It’s about understanding the entire threat landscape down to the network level.”



Anti-vaxxers are weaponizing Yelp to punish bars that require vaccine proof



Yelp shut down reviews on Smith’s page after the sudden flurry of activity, posting what the company calls an “unusual activity alert,” a stopgap measure that gives both the business and Yelp time to filter through a flood of reviews and pick out which are spam and which aren’t. Noorie Malik, Yelp’s vice president of user operations, said Yelp has a “team of moderators” that investigates pages that get an unusual amount of traffic. “After we’ve seen activity dramatically decrease or stop, we will then clean up the page so that only firsthand consumer experiences are reflected,” she said in a statement.

It’s a practice that Yelp has had to deploy more often over the course of the pandemic: According to Yelp’s 2020 Trust & Safety Report, the company saw a 206% increase over 2019 levels in unusual activity alerts. “Since January 2021, we’ve placed more than 15 unusual activity alerts on business pages related to a business’s stance on covid-19 vaccinations,” said Malik.

The majority of those cases have been since May, like the gay bar C.C. Attles in Seattle, which got an alert from Yelp after it made patrons show proof of vaccination at the door. Earlier this month, Moe’s Cantina in Chicago’s River North neighborhood got spammed after it attempted to isolate vaccinated customers from unvaccinated ones.

Spamming a business with one-star reviews is not a new tactic. In fact, perhaps the best-known case is Colorado’s Masterpiece Cakeshop, which won a 2018 Supreme Court battle for refusing to make a wedding cake for a same-sex couple, after which it got pummeled by one-star reviews. “People are still writing fake reviews. People will always write fake reviews,” Liu says.

But he adds that today’s online audiences know that platforms use algorithms to detect and flag problematic words, so bad actors can mask their grievances as complaints about poor restaurant service, like a more typical negative review, to ensure the rating stays up and counts.

That seems to have been the case with Knapp’s bar. Its Yelp reviews included comments like “There was hair in my food” and alleged cockroach sightings. “Really ridiculous, fantastic shit,” Knapp says. “If you looked at previous reviews, you would understand immediately that this doesn’t make sense.”
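The evasion tactic Liu describes can be sketched with a toy keyword filter. The word list and sample reviews below are invented for illustration; Yelp’s actual moderation system is far more sophisticated.

```python
# Toy keyword-based flagger: reviews that mention vaccine policy get
# held for moderation, while grievances disguised as ordinary service
# complaints pass straight through.

FLAGGED_TERMS = {"vaccine", "vaccination", "mandate", "passport", "proof"}

def is_flagged(review: str) -> bool:
    words = {w.strip(".,!?\"'").lower() for w in review.split()}
    return bool(words & FLAGGED_TERMS)

overt = "One star for demanding vaccine proof at the door!"
disguised = "There was hair in my food and I saw a cockroach."

print(is_flagged(overt))      # True: tripped by the keyword list
print(is_flagged(disguised))  # False: reads like a routine complaint
```

This is why, as the article notes, platforms lean on traffic-spike signals like unusual activity alerts rather than wording alone: the text of a disguised review is indistinguishable from a genuine complaint.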

Liu also says there is a limit to how much Yelp can improve its spam detection, since natural language — or the way we speak, read, and write — “is very tough for computer systems to detect.”

But Liu doesn’t think putting a human being in charge of figuring out which reviews are spam or not will solve the problem. “Human beings can’t do it,” he says. “Some people might get it right, some people might get it wrong. I have fake reviews on my webpage and even I can’t tell which are real or not.”

You might notice that I’ve only mentioned Yelp reviews thus far, despite the fact that Google reviews — which appear in the business description box on the right side of the Google search results page under “reviews” — are arguably more influential. That’s because Google’s review operations are, frankly, even more mysterious.

While businesses I spoke to said Yelp worked with them on identifying spam reviews, none of them had any luck with contacting Google’s team. “You would think Google would say, ‘Something is fucked up here,’” Knapp says. “These are IP addresses from overseas. It really undermines the review platform when things like this are allowed to happen.”



These creepy fake humans herald a new age in AI


Once viewed as less desirable than real data, synthetic data is now seen by some as a panacea. Real data is messy and riddled with bias. New data privacy regulations make it hard to collect. By contrast, synthetic data is pristine and can be used to build more diverse data sets. You can produce perfectly labeled faces, say, of different ages, shapes, and ethnicities to build a face-detection system that works across populations.

But synthetic data has its limitations. If it fails to reflect reality, it could end up producing even worse AI than messy, biased real-world data—or it could simply inherit the same problems. “What I don’t want to do is give the thumbs up to this paradigm and say, ‘Oh, this will solve so many problems,’” says Cathy O’Neil, a data scientist and founder of the algorithmic auditing firm ORCAA. “Because it will also ignore a lot of things.”

Realistic, not real

Deep learning has always been about data. But in the last few years, the AI community has learned that good data is more important than big data. Even small amounts of the right, cleanly labeled data can do more to improve an AI system’s performance than 10 times the amount of uncurated data, or even a more advanced algorithm.

That changes the way companies should approach developing their AI models, says Datagen’s CEO and cofounder, Ofir Chakon. Today, they start by acquiring as much data as possible and then tweak and tune their algorithms for better performance. Instead, they should do the opposite: keep the algorithm fixed and improve the composition of their data.

Datagen also generates fake furniture and indoor environments to put its fake humans in context.

DATAGEN

But collecting real-world data to perform this kind of iterative experimentation is too costly and time intensive. This is where Datagen comes in. With a synthetic data generator, teams can create and test dozens of new data sets a day to identify which one maximizes a model’s performance.

To ensure the realism of its data, Datagen gives its vendors detailed instructions on how many individuals to scan in each age bracket, BMI range, and ethnicity, as well as a set list of actions for them to perform, like walking around a room or drinking a soda. The vendors send back both high-fidelity static images and motion-capture data of those actions. Datagen’s algorithms then expand this data into hundreds of thousands of combinations. The synthesized data is sometimes then checked again. Fake faces are plotted against real faces, for example, to see if they seem realistic.

Datagen is now generating facial expressions to monitor driver alertness in smart cars, body motions to track customers in cashier-free stores, and irises and hand motions to improve the eye- and hand-tracking capabilities of VR headsets. The company says its data has already been used to develop computer-vision systems serving tens of millions of users.

It’s not just synthetic humans that are being mass-manufactured. Click-Ins is a startup that uses synthetic AI to perform automated vehicle inspections. Using design software, it re-creates all car makes and models that its AI needs to recognize and then renders them with different colors, damages, and deformations under different lighting conditions, against different backgrounds. This lets the company update its AI when automakers put out new models, and helps it avoid data privacy violations in countries where license plates are considered private information and thus cannot be present in photos used to train AI.

Click-Ins renders cars of different makes and models against various backgrounds.

CLICK-INS
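The combinatorial expansion that both Datagen and Click-Ins rely on can be sketched in a few lines: a handful of choices per attribute multiplies into a large synthetic training set. The attribute values here are invented for illustration.

```python
from itertools import product

# Invented rendering attributes: each synthetic car image is one
# combination of color, damage type, lighting, and background.
colors = ["red", "black", "white", "silver"]
damage = ["none", "scratch", "dent", "cracked_glass"]
lighting = ["noon", "overcast", "dusk", "night"]
backgrounds = ["street", "garage", "parking_lot"]

# The Cartesian product of the attribute lists enumerates every variant.
variants = [
    {"color": c, "damage": d, "lighting": l, "background": b}
    for c, d, l, b in product(colors, damage, lighting, backgrounds)
]

print(len(variants))  # 4 * 4 * 4 * 3 = 192 renders from 15 attribute values
```

Adding a new car model or lighting condition multiplies the whole set again, which is what lets these companies refresh their training data faster than any real-world photo shoot could.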

Mostly.ai works with financial, telecommunications, and insurance companies to provide spreadsheets of fake client data that let companies share their customer database with outside vendors in a legally compliant way. Anonymization can reduce a data set’s richness yet still fail to adequately protect people’s privacy. But synthetic data can be used to generate detailed fake data sets that share the same statistical properties as a company’s real data. It can also be used to simulate data that the company doesn’t yet have, including a more diverse client population or scenarios like fraudulent activity.
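One simple way to produce a table that shares the statistical properties of real records, in the spirit of what is described above, is to fit a distribution to the real data and sample fresh rows from it. The sketch below uses a multivariate Gaussian, which preserves means and covariances; the column names and numbers are invented, and production systems use far richer generative models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "real" customer table (columns: age, income, balance),
# drawn here from a known distribution purely for the demo.
real = rng.multivariate_normal(
    mean=[40.0, 55_000.0, 12_000.0],
    cov=[[90.0, 30_000.0, 8_000.0],
         [30_000.0, 4.0e7, 5.0e6],
         [8_000.0, 5.0e6, 9.0e6]],
    size=5_000,
)

# Fit the real table's mean vector and covariance matrix, then sample
# brand-new rows: the synthetic table shares the same marginal means
# and pairwise correlations without repeating any real record.
mu = real.mean(axis=0)
sigma = np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mu, sigma, size=5_000)

print(np.allclose(mu, synthetic.mean(axis=0), rtol=0.05))  # statistics match
```

Because rows are sampled from the fitted distribution rather than copied, no synthetic row maps to a single real customer, though sampling alone is not a formal privacy guarantee.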

Proponents of synthetic data say that it can help evaluate AI as well. In a recent paper published at an AI conference, Suchi Saria, an associate professor of machine learning and health care at Johns Hopkins University, and her coauthors demonstrated how data-generation techniques could be used to extrapolate different patient populations from a single set of data. This could be useful if, for example, a company only had data from New York City’s more youthful population but wanted to understand how its AI performs on an aging population with higher prevalence of diabetes. She’s now starting her own company, Bayesian Health, which will use this technique to help test medical AI systems.

The limits of faking it

But is synthetic data overhyped?

When it comes to privacy, “just because the data is ‘synthetic’ and does not directly correspond to real user data does not mean that it does not encode sensitive information about real people,” says Aaron Roth, a professor of computer and information science at the University of Pennsylvania. Some data generation techniques have been shown to closely reproduce images or text found in the training data, for example, while others are vulnerable to attacks that make them fully regurgitate that data.

This might be fine for a firm like Datagen, whose synthetic data isn’t meant to conceal the identity of the individuals who consented to be scanned. But it would be bad news for companies that offer their solution as a way to protect sensitive financial or patient information.


Copyright © 2020 Diliput News.