
UBI is dead; long live guaranteed income


Stockton Mayor Michael Tubbs


Buoyed by this success, Tubbs started an organization, Mayors for Guaranteed Income, to expand his city’s pilot. To date, 42 mayors across America have signed on, and additional projects are now being run in towns and cities from Hudson, New York, and Gary, Indiana, to Compton, California. 

Since the results of SEED’s first year were released in March, Tubbs has often been asked what he learned from it. “I am tempted to say ‘Nothing,’” he told me in late March.

He means the pilot didn’t tell him anything that wasn’t already obvious to him: he knew from personal experience that many stereotypes about poor people (especially poor Black people) are not, as he put it, “rooted in reality.” 

Tubbs was born in Stockton to a teenage mother and an incarcerated father. He attended Stanford on a need-based scholarship and returned home after graduation. Soon he was elected to the City Council, and at just 26 he became mayor. 

Tubbs didn’t need the data to know he could trust people to make rational financial decisions, but the experience did help him “learn the power of narrative.” 

He recognized that “sometimes ideology, sometimes racism,” colors people’s perceptions. Part of his job as mayor became to “illustrate what’s real and what’s not,” he says. He saw the chance to “illustrate what’s actually backed by data and what’s backed by bias.” 

The need to change narratives through research and evidence was also apparent to Nyandoro, of Magnolia Mother’s Trust. A few days before the third cohort began receiving money, I asked her what research questions she hoped this new cycle would answer.

“We have more than enough data now to prove that cash works,” she told me. Now her question was not how cash would affect low-income individuals but, rather, “What is the data or talking points that we need to get to the policymakers … to move their hearts?” What evidence could be sufficient to make guaranteed income a federal-level policy? 

As it turned out, what made the difference wasn’t more research but a global pandemic. 

The pandemic effect

When stay-at-home orders closed many businesses—and destroyed jobs, especially for already vulnerable low-income workers—the chasm of American inequality became harder to ignore. Food lines stretched for miles. Millions of Americans faced eviction. Students without internet access at home resorted to sitting in public parking lots to hook into Wi-Fi so they could attend classes online. 

This was all worse for people of color. By February 2021, Black and Hispanic women, who make up only a third of the female labor force, accounted for nearly half of women’s pandemic job losses. Black men, meanwhile, were unemployed at almost double the rate of other ethnic groups, according to Census data analyzed by the Pew Research Center. 

All this also changed the conversation about the costs of guaranteed income programs. When the comparison was between basic income and the status quo, such programs had been seen as too expensive to be realistic. But in the face of the recession caused by the pandemic, relief packages were suddenly seen as necessary to jump-start the American economy or, at the very least, avoid what Federal Reserve chairman Jerome Powell called a “downward spiral” with “tragic” outcomes.


“Covid-19 really illustrated all the things that those of us who actually work with, and work for, and are in relationship with, folks who are economically insecure know,” says Tubbs. That is, the problem of poverty lies not with “the people. It’s with the systems. It’s with the policies.”

Stimulus payments and increased unemployment benefits—that is, direct cash transfers to Americans with no conditions attached—passed with huge public support. And earlier this year, an expanded Child Tax Credit (CTC) was introduced that provides up to $3,600 per child, paid in monthly installments, to most American families. 

This new benefit, which is set to last for a year, is available even to families that don’t make enough money to pay income tax; they had been left out of previous versions of the tax credit. And by sending monthly payments of up to $300 per child, rather than a single rebate at the end of the year, it gives families a better chance to plan and budget. It is expected to cut child poverty in half. 
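The installment arithmetic behind those figures is simple enough to sanity-check. Here is a minimal sketch, assuming the article’s numbers of up to $3,600 per child spread evenly across twelve monthly payments (the function name and the even split are illustrative assumptions; actual amounts vary by a child’s age):

```python
# Back-of-the-envelope check on the CTC figures quoted above, assuming an
# even split of the annual credit across twelve monthly installments.

def monthly_ctc_payment(children: int, annual_credit_per_child: int = 3_600) -> float:
    return children * annual_credit_per_child / 12

print(monthly_ctc_payment(1))  # 300.0 -- matches the "up to $300 per child"
print(monthly_ctc_payment(2))  # 600.0
```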

Washington might not have used the language of guaranteed income, but these programs fit the definition.

The CTC is “a game changer,” says Natalie Foster, a cofounder of the Economic Security Project, which funded many of the guaranteed income pilots, including both SEED and Mayors for Guaranteed Income. It “overturns decades of punitive welfare policies in America,” she says, and sets the stage for more permanent policies. 

Whereas her organization originally thought it might take a decade of data from city-based pilot programs to “inform federal policymaking,” the CTC means that guaranteed income has, at least temporarily, arrived. 

The stimulus bills and CTC also make Tubbs “more bullish now than ever” that guaranteed income could soon become a permanent fixture of federal policy. 

“We live in a time of pandemics,” he says. “It’s not just covid-19. It’s an earthquake next month. It’s wildfires. All these things are happening all the time—not even mentioning automation. We have to have the ability for our folks to build economic resilience.”

The responsibility for poverty is “with the policies,” says Michael Tubbs, the former mayor of Stockton, California.

AP PHOTO/RICH PEDRONCELLI, FILE

But even if the rhetoric has shifted away from the technocratic concept of UBI, Silicon Valley’s interest in universality hasn’t gone away. Last April, Jack Dorsey announced a new philanthropic initiative, Start Small LLC, to give away $1 billion. 

The donations would focus initially on covid-19 relief and then, after the pandemic, shift to universal basic income and girls’ education, he said. Putting money toward these causes, Dorsey explained, represented “the best long-term solutions to the existential problems facing the world.” 

Despite its announced focus on universal basic income, Start Small has become one of the largest funders of guaranteed income. It donated $18 million to Mayors for Guaranteed Income, $15 million to the Open Research Lab (previously known as the Y Combinator basic income experiment), $7 million to Humanity Forward, Andrew Yang’s foundation, and most recently $3.5 million to establish a Cash Transfer Lab at New York University to conduct more research on the issue. 

Yang, now running for mayor of New York City, has also shifted away from his focus on universality. Rather than sending $1,000 checks every month to everyone, he now advocates for a guaranteed minimum income of $2,000 per year for New Yorkers living in extreme poverty. 

Tubbs claims some credit for these shifts. He recalls a conversation with Dorsey in which he told the billionaire, “It’s gonna take time to get to universality, but it’s urgent that we do guaranteed income… So look, we’re not going to … test a UBI. We can test the income guarantee. Let’s start there.”

If his donations are any indication, Dorsey took Tubbs’s words to heart. What’s still unclear, however, is whether he and other tech leaders see guaranteed income as a stepping-stone to UBI or as an end in itself. (Neither Dorsey nor Start Small staff responded to requests for an interview.)

Scott Santens, one of the earliest “basic income bros,” believes that the tech sector’s initial interest in UBI as a fix for job loss is still relevant. The pandemic has led to an increase in sales of automation and robots, he says, pointing to reports that inquiries about Amazon’s call center tech have increased, as have purchases of warehouse robots to replace warehouse workers. 

Meanwhile, Sam Altman, who helped kick off Y Combinator’s UBI experiment before leaving to head the artificial-intelligence startup OpenAI, wrote a recent manifesto about the situation. In it, he urged that we remain focused on the bigger picture: even if the pandemic has caused a short-term shock, it is technology—specifically, artificial intelligence—that will have the greatest impact on employment over time. 

Altman called for a UBI funded by a 2.5% tax on businesses. “The best way to improve capitalism is to enable everyone to benefit from it directly as an equity owner,” he wrote.

But would “everyone” include people of color, who are already being harmed at disproportionate levels by AI’s biases? And could a dividend paid out from the spoils of artificial intelligence make up for that harm? Altman’s manifesto notably leaves out any mention of race. 

When reached for comment, he sent a statement through an OpenAI representative saying, “We must build AI in a way that doesn’t cause more harm to traditionally marginalized communities. In addition to building the technology in an equitable and just way, we must also find a way to share the benefits broadly. These are independently important issues.” 




Transforming health care at the edge




Edge computing, through on-site sensors and devices, as well as last-mile edge equipment that connects to those devices, allows data processing and analysis to happen close to the digital interaction. Rather than using centralized cloud or on-premises infrastructure, these distributed tools at the edge offer the same quality of data processing but without latency issues or massive bandwidth use.

“The real-time feedback loop required for things like remote monitoring of a patient’s heart and respiratory metrics is only possible with something like edge computing,” Mirchandani says. “If all that information took several seconds or a minute to get processed somewhere else, it’s useless.”
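To make that feedback loop concrete, here is a minimal sketch of the pattern Mirchandani describes: readings are scored on the device itself, so an alert never waits on a network round trip. Everything here (names, thresholds) is an illustrative assumption, not any vendor’s API:

```python
# Hypothetical edge-side check: the vital-sign sample is analyzed where it is
# produced, so the alert path involves no round trip to a distant server.

from dataclasses import dataclass
from typing import Optional

@dataclass
class VitalSample:
    heart_rate_bpm: int
    spo2_percent: float

def check_on_device(sample: VitalSample) -> Optional[str]:
    """Runs on the bedside device or local gateway; thresholds illustrative."""
    if not 40 <= sample.heart_rate_bpm <= 140:
        return "cardiac alert"
    if sample.spo2_percent < 90.0:
        return "respiratory alert"
    return None  # only summaries ever need to leave the site

alert = check_on_device(VitalSample(heart_rate_bpm=150, spo2_percent=97.0))
print(alert)  # "cardiac alert", raised locally rather than in the cloud
```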

Opportunities and challenges at the health-care edge

The sky’s the limit when it comes to the opportunities to use edge computing in health care, says Paul Savill, senior vice president of product management and services at technology company Lumen, especially as health systems work to reduce costs by shifting testing and treatment out of hospitals and into clinics, retail locations, and homes.

“A lot of patient care now happens at retail drugstores, whether it is blood work, scans, or other assessments,” Savill says. “With edge computing capabilities and tools, that can now take place on-site, on a real-time basis, so you don’t have to send things to a lab and wait a day or week to get results back.”

The arrival of 5G technology, the new standard for broadband cellular networks, will also drive opportunities, as it works with edge computing tools to support the internet of things and machine learning, adds Mirchandani. “It’s the combination of this super-low-latency network and computing at the edge that will help these powerful new applications take flight,” he says. Take robotic surgeries—it’s crucial for the surgeon to have nearly instant, sub-millisecond sensory feedback. “That’s not possible in any other way than through technologies such as edge computing and 5G,” he says.
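The physics behind that claim is easy to verify: light in fiber covers roughly 200 km per millisecond, so a round trip to a cloud region hundreds of kilometers away burns several milliseconds before any computation even starts, while an edge node a few kilometers away keeps propagation delay far below a millisecond. A rough sketch (the distances are invented examples):

```python
# Propagation-delay arithmetic only; queuing, routing, and processing add more.
FIBER_KM_PER_MS = 200.0  # light in fiber travels ~200,000 km/s

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / FIBER_KM_PER_MS

print(round_trip_ms(1200))  # distant cloud region: 12.0 ms before any processing
print(round_trip_ms(5))     # nearby edge node:      0.05 ms
```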


Data security, however, is a particular challenge for any health-care-related technology because of HIPAA, the US health information privacy law, and other regulations. The real-time data transmission edge computing provides will be under significant scrutiny, Mirchandani explains, which may affect widespread adoption. “There needs to be an almost 100% guarantee that the information you generate from a heart monitor, pulse oximeter, blood glucose monitor, or any other device will not be intercepted or disrupted in any way,” he says.

Still, edge computing technologies, paired with the right security standards and tools, are often more secure and reliable than the on-premises environment a business could implement on its own, Savill points out. “It’s about understanding the entire threat landscape down to the network level.”


Anti-vaxxers are weaponizing Yelp to punish bars that require vaccine proof




Yelp reviews of Smith’s were shut down after the sudden flurry of activity on its page triggered what the company calls an “unusual activity alert,” a stopgap measure that gives both the business and Yelp time to filter through a flood of reviews and pick out which are spam and which aren’t. Noorie Malik, Yelp’s vice president of user operations, said Yelp has a “team of moderators” who investigate pages that get an unusual amount of traffic. “After we’ve seen activity dramatically decrease or stop, we will then clean up the page so that only firsthand consumer experiences are reflected,” she said in a statement.

It’s a practice that Yelp has had to deploy more often over the course of the pandemic: According to Yelp’s 2020 Trust & Safety Report, the company saw a 206% increase over 2019 levels in unusual activity alerts. “Since January 2021, we’ve placed more than 15 unusual activity alerts on business pages related to a business’s stance on covid-19 vaccinations,” said Malik.

The majority of those cases have been since May, like the gay bar C.C. Attles in Seattle, which got an alert from Yelp after it made patrons show proof of vaccination at the door. Earlier this month, Moe’s Cantina in Chicago’s River North neighborhood got spammed after it attempted to isolate vaccinated customers from unvaccinated ones.

Spamming a business with one-star reviews is not a new tactic. In fact, perhaps the best-known case is Colorado’s Masterpiece Cakeshop, which won a 2018 Supreme Court battle over its refusal to make a wedding cake for a same-sex couple, after which it got pummeled by one-star reviews. “People are still writing fake reviews. People will always write fake reviews,” Liu says.

But he adds that today’s online audiences know that platforms use algorithms to detect and flag problematic words, so bad actors can mask their grievances as complaints about poor restaurant service, the stuff of a more typical negative review, to ensure the rating stays up and counts.

That seems to have been the case with Knapp’s bar. Its Yelp reviews included complaints like “There was hair in my food” and alleged cockroach sightings. “Really ridiculous, fantastic shit,” Knapp says. “If you looked at previous reviews, you would understand immediately that this doesn’t make sense.” 
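A toy version of the cat-and-mouse game Liu describes makes the evasion obvious: a naive keyword filter catches reviews that rant about vaccine mandates but waves through a fabricated complaint dressed up as ordinary bad service. The word list and sample reviews below are invented for illustration:

```python
# Toy keyword flagger; real platforms use far more sophisticated models,
# but the masking tactic defeats them for the same basic reason.

FLAG_TERMS = {"vaccine", "vaccination", "mandate", "unvaccinated"}

def is_flagged(review: str) -> bool:
    return bool(set(review.lower().split()) & FLAG_TERMS)

print(is_flagged("one star. requiring vaccine proof at the door is a mandate"))  # True
print(is_flagged("one star. there was hair in my food"))  # False: masked grievance
```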

Liu also says there is a limit to how much Yelp can improve its spam detection, since natural language (the way we speak, read, and write) “is very tough for computer systems to detect.” 

But Liu doesn’t think putting a human being in charge of figuring out which reviews are spam or not will solve the problem. “Human beings can’t do it,” he says. “Some people might get it right, some people might get it wrong. I have fake reviews on my webpage and even I can’t tell which are real or not.”

You might notice that I’ve only mentioned Yelp reviews thus far, even though Google reviews, which appear under “reviews” in the business description box on the right side of the Google search results page, are arguably more influential. That’s because Google’s review operations are, frankly, even more mysterious. 

While businesses I spoke to said Yelp worked with them on identifying spam reviews, none of them had any luck with contacting Google’s team. “You would think Google would say, ‘Something is fucked up here,’” Knapp says. “These are IP addresses from overseas. It really undermines the review platform when things like this are allowed to happen.”


These creepy fake humans herald a new age in AI



Once viewed as less desirable than real data, synthetic data is now seen by some as a panacea. Real data is messy and riddled with bias. New data privacy regulations make it hard to collect. By contrast, synthetic data is pristine and can be used to build more diverse data sets. You can produce perfectly labeled faces, say, of different ages, shapes, and ethnicities to build a face-detection system that works across populations.

But synthetic data has its limitations. If it fails to reflect reality, it could end up producing even worse AI than messy, biased real-world data—or it could simply inherit the same problems. “What I don’t want to do is give the thumbs up to this paradigm and say, ‘Oh, this will solve so many problems,’” says Cathy O’Neil, a data scientist and founder of the algorithmic auditing firm ORCAA. “Because it will also ignore a lot of things.”

Realistic, not real

Deep learning has always been about data. But in the last few years, the AI community has learned that good data is more important than big data. Even small amounts of the right, cleanly labeled data can do more to improve an AI system’s performance than 10 times the amount of uncurated data, or even a more advanced algorithm.

That changes the way companies should approach developing their AI models, says Datagen’s CEO and cofounder, Ofir Chakon. Today, they start by acquiring as much data as possible and then tweak and tune their algorithms for better performance. Instead, they should be doing the opposite: using the same algorithm while improving the composition of their data.
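In pseudocode terms, the workflow Chakon describes inverts the usual loop: hold the training algorithm fixed and let candidate data sets compete on a held-out validation set. A minimal sketch follows; `generate_dataset`, `train`, and `evaluate` are hypothetical placeholders, not Datagen’s API:

```python
# Data-centric iteration: the model and training routine never change;
# only the composition of the (synthetic) training data varies.

def best_dataset_spec(candidate_specs, generate_dataset, train, evaluate, val_set):
    """Return the data-set specification whose generated data trains the
    best-scoring model on a fixed validation set."""
    return max(
        candidate_specs,
        key=lambda spec: evaluate(train(generate_dataset(spec)), val_set),
    )
```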

Datagen also generates fake furniture and indoor environments to put its fake humans in context.

DATAGEN

But collecting real-world data to perform this kind of iterative experimentation is too costly and time intensive. This is where Datagen comes in. With a synthetic data generator, teams can create and test dozens of new data sets a day to identify which one maximizes a model’s performance.

To ensure the realism of its data, Datagen gives its vendors detailed instructions on how many individuals to scan in each age bracket, BMI range, and ethnicity, as well as a set list of actions for them to perform, like walking around a room or drinking a soda. The vendors send back both high-fidelity static images and motion-capture data of those actions. Datagen’s algorithms then expand this data into hundreds of thousands of combinations. The synthesized data is sometimes then checked again. Fake faces are plotted against real faces, for example, to see if they seem realistic.
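One simple form such a realism check could take is statistical: reduce real and synthetic faces to numeric features (landmarks, embeddings) and compare the distributions. A sketch with stand-in random data; nothing here is Datagen’s actual pipeline:

```python
# Compare summary statistics of real vs. synthetic feature vectors; a large
# gap suggests the synthetic faces are drifting away from realistic ones.

import numpy as np

rng = np.random.default_rng(0)
real_feats = rng.normal(0.0, 1.0, size=(1000, 64))   # stand-in for real faces
synth_feats = rng.normal(0.1, 1.0, size=(1000, 64))  # stand-in for fakes

mean_gap = np.linalg.norm(real_feats.mean(axis=0) - synth_feats.mean(axis=0))
print(f"mean-feature gap: {mean_gap:.3f}")
```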

Datagen is now generating facial expressions to monitor driver alertness in smart cars, body motions to track customers in cashier-free stores, and irises and hand motions to improve the eye- and hand-tracking capabilities of VR headsets. The company says its data has already been used to develop computer-vision systems serving tens of millions of users.

It’s not just synthetic humans that are being mass-manufactured. Click-Ins is a startup that uses synthetic data to perform automated vehicle inspections. Using design software, it re-creates all car makes and models that its AI needs to recognize and then renders them with different colors, damages, and deformations under different lighting conditions, against different backgrounds. This lets the company update its AI when automakers put out new models, and helps it avoid data privacy violations in countries where license plates are considered private information and thus cannot be present in photos used to train AI.
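The expansion Click-Ins performs is combinatorial at heart: every make and model crossed with every color, damage type, lighting condition, and backdrop. Here is a sketch of that cross product, with invented category lists:

```python
# Each tuple is one render job; a handful of categories multiplies quickly.

from itertools import product

makes = ["sedan_a", "suv_b"]
colors = ["red", "white", "black"]
damages = ["none", "dented_door", "scratched_bumper"]
lighting = ["noon", "dusk", "overcast"]
backgrounds = ["street", "garage"]

render_jobs = list(product(makes, colors, damages, lighting, backgrounds))
print(len(render_jobs))  # 2 * 3 * 3 * 3 * 2 = 108 images per rendering pass
```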

Click-Ins renders cars of different makes and models against various backgrounds.

CLICK-INS

Mostly.ai works with financial, telecommunications, and insurance companies to provide spreadsheets of fake client data that let companies share their customer database with outside vendors in a legally compliant way. Anonymization can reduce a data set’s richness yet still fail to adequately protect people’s privacy. But synthetic data can be used to generate detailed fake data sets that share the same statistical properties as a company’s real data. It can also be used to simulate data that the company doesn’t yet have, including a more diverse client population or scenarios like fraudulent activity.
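A minimal illustration of “same statistical properties, no real customers”: fit a simple generative model to numeric columns and sample fresh rows that reproduce the means and correlations. Real products and real customer tables are far more complex; the Gaussian here only conveys the idea:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in "real" customer table: columns are age and account balance.
real = rng.multivariate_normal(
    mean=[40, 55_000],
    cov=[[90, 18_000], [18_000, 3.0e8]],
    size=5_000,
)

mean, cov = real.mean(axis=0), np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mean, cov, size=5_000)  # no row is a person

print(np.corrcoef(real, rowvar=False)[0, 1])       # correlation in real data
print(np.corrcoef(synthetic, rowvar=False)[0, 1])  # preserved in synthetic data
```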

Proponents of synthetic data say that it can help evaluate AI as well. In a recent paper published at an AI conference, Suchi Saria, an associate professor of machine learning and health care at Johns Hopkins University, and her coauthors demonstrated how data-generation techniques could be used to extrapolate different patient populations from a single set of data. This could be useful if, for example, a company only had data from New York City’s more youthful population but wanted to understand how its AI performs on an aging population with higher prevalence of diabetes. She’s now starting her own company, Bayesian Health, which will use this technique to help test medical AI systems.
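The paper’s exact technique isn’t described here, but one simple way to extrapolate in that direction is importance reweighting: score each patient in the data you have, then weight the scores by how common such patients would be in the target population. A sketch with simulated numbers (the age split and weights are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
ages = rng.integers(18, 90, size=10_000)      # the sample you actually have
acc_prob = np.where(ages >= 65, 0.80, 0.95)   # model is weaker on older patients
correct = rng.random(10_000) < acc_prob

weights = np.where(ages >= 65, 3.0, 1.0)      # up-weight toward an aging population
weights /= weights.sum()

print("accuracy on sample:     ", correct.mean())
print("accuracy after reweight:", float(np.sum(weights * correct)))  # lower
```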

The limits of faking it

But is synthetic data overhyped?

When it comes to privacy, “just because the data is ‘synthetic’ and does not directly correspond to real user data does not mean that it does not encode sensitive information about real people,” says Aaron Roth, a professor of computer and information science at the University of Pennsylvania. Some data generation techniques have been shown to closely reproduce images or text found in the training data, for example, while others are vulnerable to attacks that make them fully regurgitate that data.

This might be fine for a firm like Datagen, whose synthetic data isn’t meant to conceal the identity of the individuals who consented to be scanned. But it would be bad news for companies that offer their solution as a way to protect sensitive financial or patient information.
