But there are tens of thousands of Chinese characters, and a 5-by-7 grid was too small to make them legible. Chinese required a grid of 16 by 16 or larger—i.e., at least 32 bytes of memory (256 bits) per character. Were one to imagine a font containing 70,000 low-resolution Chinese characters, the total memory requirement would exceed two megabytes. Even a font containing only 8,000 of the most common Chinese characters would require approximately 256 kilobytes just to store the bitmaps. That was four times the total memory capacity of most off-the-shelf personal computers in the early 1980s.
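The memory arithmetic above can be verified in a few lines of Python (a minimal sketch; the grid size and character counts are the ones cited in the text):

```python
def font_bytes(grid: int, num_chars: int) -> int:
    """Bytes needed to store num_chars bitmap glyphs on a grid-by-grid grid."""
    bits_per_glyph = grid * grid        # one bit per pixel
    bytes_per_glyph = bits_per_glyph // 8
    return bytes_per_glyph * num_chars

print(font_bytes(16, 1))        # 32 bytes per 16-by-16 character
print(font_bytes(16, 70_000))   # 2,240,000 bytes: over two megabytes
print(font_bytes(16, 8_000))    # 256,000 bytes: roughly 256 kilobytes
```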
As serious as these memory challenges were, the most taxing problems confronting low-res Chinese font production in the 1970s and 1980s were ones of aesthetics and design. Long before anyone sat down with a program like Gridmaster, the lion’s share of work took place off the computer, using pen, paper, and correction fluid.
Designers spent years trying to fashion bitmaps that fulfilled the low-memory requirements and preserved a modicum of calligraphic elegance. Among those who created this character set, whether by hand-drawing drafts of bitmaps for specific Chinese characters or digitizing them using Gridmaster, were Lily Huan-Ming Ling (凌焕銘) and Ellen Di Giovanni.
The core problem that designers faced was translating between two radically different ways of writing Chinese: the hand-drawn character, produced with pen or brush, and the bitmap glyph, produced with an array of pixels arranged on two axes. Designers had to decide how (and whether) they were going to try to re-create certain orthographic features of handwritten Chinese, such as entrance strokes, stroke tapering, and exit strokes.
In the case of the Sinotype III font, the process of designing and digitizing low-resolution Chinese bitmaps was thoroughly documented. One of the most fascinating archival sources from this period is a binder full of grids with hand-drawn hash marks all over them—sketches that would later be digitized into bitmaps for many thousands of Chinese characters. Each of these characters was carefully laid out and, in most cases, edited by Louis Rosenblum and GARF, using correction fluid to erase any “bits” the editor disagreed with. On top of the initial set of green hash marks, a second set of red hash marks then indicated the “final” draft. Only then did the work of data entry begin.
Given the sheer number of bitmaps that the team needed to design—at least 3,000 (and ideally many more) if the machine had any hopes of fulfilling consumers’ needs—one might assume that the designers looked for ways to streamline their work. One way they could have done this, for example, would have been to duplicate Chinese radicals—the base components of a character—when they appeared in roughly the same location, size, and orientation from one character to another. When producing the many dozens of common Chinese characters containing the “woman radical” (女), for example, the team at GARF could have (and, in theory, should have) created just one standard bitmap, and then replicated it within every character in which that radical appeared.
No such mechanistic decisions were made, however, as the archival materials show. On the contrary, Louis Rosenblum insisted that designers adjust each of these components—often in nearly imperceptible ways—to ensure they were in harmony with the overall character in which they appeared.
In the bitmaps for juan (娟, graceful) and mian (娩, to deliver), for example—each of which contains the woman radical—that radical has been changed ever so slightly. In the character juan, the middle section of the woman radical occupies a horizontal span of six pixels, as compared with five pixels in the character mian. At the same time, however, the bottom-right curve of the woman radical extends outward just one pixel further in the character mian, and in the character juan that stroke does not extend at all.
Across the entire font, this level of precision was the rule rather than the exception.
When we juxtapose the draft bitmap drawings against their final forms, we see that more changes have been made. In the draft version of luo (罗, collect, net), for example, the bottom-left stroke extends downward at a perfect 45° angle before tapering into the digitized version of an outstroke. In the final version, however, the curve has been “flattened,” beginning at 45° but then leveling out.
Despite the seemingly small space in which designers had to work, they had to make a staggering number of choices. And every one of these decisions affected every other decision they made for a specific character, since adding even one pixel often changed the overall horizontal and vertical balance.
The unforgiving size of the grid impinged upon the designers’ work in other, unexpected ways. We see this most clearly in the devilish problem of achieving symmetry. Symmetrical layouts—which abound in Chinese characters—were especially difficult to represent in low-resolution frameworks because mirror symmetry requires a central row or column of pixels to reflect around, and such a center exists only when the dimension is odd. Bitmap grids with even dimensions (such as the 16-by-16 grid) therefore made true symmetry impossible. GARF managed to achieve symmetry by, in many cases, using only a portion of the overall grid: just a 15-by-15 region within the overall 16-by-16 grid. This reduced the amount of usable space even further.
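The even-versus-odd problem can be seen in a small sketch (hypothetical helper functions, not GARF’s actual tooling): a vertical mirror axis must fall on a pixel column, not between two, so a symmetric glyph needs an odd width.

```python
def has_center_column(width: int) -> bool:
    # A vertical mirror axis must land on a pixel column, not between two.
    return width % 2 == 1

def mirror_row(row: str) -> str:
    # Reflect a row of pixels ('#' on, '.' off) left to right.
    return row[::-1]

# On a 16-wide grid, the axis falls between columns 8 and 9, so no pixel
# column sits on it; a 15-wide region restores a true center column.
print(has_center_column(16))   # False
print(has_center_column(15))   # True

# A symmetric row is unchanged by mirroring, which is possible only
# when the row has an odd number of pixels with one in the middle.
row = "..##.#.#.##.."   # 13 pixels, symmetric about the middle pixel
print(mirror_row(row) == row)  # True
```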
The story becomes even more complex when we begin to compare the bitmap fonts created by different companies or creators for different projects. Consider the water radical (氵) as it appeared in the Sinotype III font (below and on the right), as opposed to another early Chinese font created by H.C. Tien (on the left), a Chinese-American psychotherapist and entrepreneur who experimented with Chinese computing in the 1970s and 1980s.
As minor as the above examples might seem, each represented yet another decision (among thousands) that the GARF design team had to make, whether during the drafting or the digitization phase.
Low resolution did not stay “low” for long, of course. Computing advances gave rise to ever denser bitmaps, ever faster processing speeds, and ever diminishing costs for memory. In our current age of 4K resolution, retina displays, and more, it may be hard to appreciate the artistry—both aesthetic and technical—that went into the creation of early Chinese bitmap fonts, as limited as they were. But it was problem-solving like this that ultimately made computing, new media, and the internet accessible to one-sixth of the global population.
Transforming health care at the edge
Edge computing, through on-site sensors and devices, as well as last-mile edge equipment that connects to those devices, allows data processing and analysis to happen close to the digital interaction. Rather than using centralized cloud or on-premises infrastructure, these distributed tools at the edge offer the same quality of data processing but without latency issues or massive bandwidth use.
“The real-time feedback loop required for things like remote monitoring of a patient’s heart and respiratory metrics is only possible with something like edge computing,” Mirchandani says. “If all that information took several seconds or a minute to get processed somewhere else, it’s useless.”
Opportunities and challenges at the health-care edge
The sky’s the limit when it comes to the opportunities to use edge computing in health care, says Paul Savill, senior vice president of product management and services at technology company Lumen, especially as health systems work to reduce costs by shifting testing and treatment out of hospitals and into clinics, retail locations, and homes.
“A lot of patient care now happens at retail drugstores, whether it is blood work, scans, or other assessments,” Savill says. “With edge computing capabilities and tools, that can now take place on-site, on a real-time basis, so you don’t have to send things to a lab and wait a day or week to get results back.”
The arrival of 5G technology, the new standard for broadband cellular networks, will also drive opportunities, as it works with edge computing tools to support the internet of things and machine learning, adds Mirchandani. “It’s the combination of this super-low-latency network and computing at the edge that will help these powerful new applications take flight,” he says. Take robotic surgeries—it’s crucial for the surgeon to have nearly instant, sub-millisecond sensory feedback. “That’s not possible in any other way than through technologies such as edge computing and 5G,” he says.
Data security, however, is a particular challenge for any health-care-related technology because of HIPAA, the US health information privacy law, and other regulations. The real-time data transmission edge computing provides will be under significant scrutiny, Mirchandani explains, which may affect widespread adoption. “There needs to be an almost 100% guarantee that the information you generate from a heart monitor, pulse oximeter, blood glucose monitor, or any other device will not be intercepted or disrupted in any way,” he says.
Still, edge computing technologies, paired with the right security standards and tools, are often more secure and reliable than the on-premises environment a business could implement on its own, Savill points out. “It’s about understanding the entire threat landscape down to the network level.”
Anti-vaxxers are weaponizing Yelp to punish bars that require vaccine proof
Reviews on Smith’s Yelp page were shut down after the sudden flurry of activity triggered what the company calls an “unusual activity alert,” a stopgap measure that gives both the business and Yelp a chance to filter through a flood of reviews and pick out which are spam and which aren’t. Noorie Malik, Yelp’s vice president of user operations, said Yelp has a “team of moderators” that investigate pages that get an unusual amount of traffic. “After we’ve seen activity dramatically decrease or stop, we will then clean up the page so that only firsthand consumer experiences are reflected,” she said in a statement.
It’s a practice that Yelp has had to deploy more often over the course of the pandemic: According to Yelp’s 2020 Trust & Safety Report, the company saw a 206% increase over 2019 levels in unusual activity alerts. “Since January 2021, we’ve placed more than 15 unusual activity alerts on business pages related to a business’s stance on covid-19 vaccinations,” said Malik.
The majority of those cases have been since May, like the gay bar C.C. Attles in Seattle, which got an alert from Yelp after it made patrons show proof of vaccination at the door. Earlier this month, Moe’s Cantina in Chicago’s River North neighborhood got spammed after it attempted to isolate vaccinated customers from unvaccinated ones.
Spamming a business with one-star reviews is not a new tactic. In fact, perhaps the best-known case is Colorado’s Masterpiece Cakeshop, which won a 2018 Supreme Court battle after refusing to make a wedding cake for a same-sex couple, and which got pummeled by one-star reviews in the aftermath. “People are still writing fake reviews. People will always write fake reviews,” Liu says.
But he adds that today’s online audiences know that platforms use algorithms to detect and flag problematic words, so bad actors can mask their grievances as complaints about poor restaurant service, like a more typical negative review, to ensure the rating stays up and counts.
That seems to have been the case with Knapp’s bar. The fake reviews included comments like “There was hair in my food” and alleged cockroach sightings. “Really ridiculous, fantastic shit,” Knapp says. “If you looked at previous reviews, you would understand immediately that this doesn’t make sense.”
Liu also says there is a limit to how much Yelp can improve its spam detection, since natural language—the way we speak, read, and write—“is very tough for computer systems to detect.”
But Liu doesn’t think putting a human being in charge of figuring out which reviews are spam or not will solve the problem. “Human beings can’t do it,” he says. “Some people might get it right, some people might get it wrong. I have fake reviews on my webpage and even I can’t tell which are real or not.”
You might notice that I’ve only mentioned Yelp reviews thus far, despite the fact that Google reviews—which appear in the business description box on the right side of the Google search results page under “reviews”—are arguably more influential. That’s because Google’s review operations are, frankly, even more mysterious.
While businesses I spoke to said Yelp worked with them on identifying spam reviews, none of them had any luck with contacting Google’s team. “You would think Google would say, ‘Something is fucked up here,’” Knapp says. “These are IP addresses from overseas. It really undermines the review platform when things like this are allowed to happen.”
These creepy fake humans herald a new age in AI
Once viewed as less desirable than real data, synthetic data is now seen by some as a panacea. Real data is messy and riddled with bias. New data privacy regulations make it hard to collect. By contrast, synthetic data is pristine and can be used to build more diverse data sets. You can produce perfectly labeled faces, say, of different ages, shapes, and ethnicities to build a face-detection system that works across populations.
But synthetic data has its limitations. If it fails to reflect reality, it could end up producing even worse AI than messy, biased real-world data—or it could simply inherit the same problems. “What I don’t want to do is give the thumbs up to this paradigm and say, ‘Oh, this will solve so many problems,’” says Cathy O’Neil, a data scientist and founder of the algorithmic auditing firm ORCAA. “Because it will also ignore a lot of things.”
Realistic, not real
Deep learning has always been about data. But in the last few years, the AI community has learned that good data is more important than big data. Even small amounts of the right, cleanly labeled data can do more to improve an AI system’s performance than 10 times the amount of uncurated data, or even a more advanced algorithm.
That changes the way companies should approach developing their AI models, says Datagen’s CEO and cofounder, Ofir Chakon. Today, they start by acquiring as much data as possible and then tweak and tune their algorithms for better performance. Instead, they should be doing the opposite: use the same algorithm while improving on the composition of their data.
But collecting real-world data to perform this kind of iterative experimentation is too costly and time intensive. This is where Datagen comes in. With a synthetic data generator, teams can create and test dozens of new data sets a day to identify which one maximizes a model’s performance.
To ensure the realism of its data, Datagen gives its vendors detailed instructions on how many individuals to scan in each age bracket, BMI range, and ethnicity, as well as a set list of actions for them to perform, like walking around a room or drinking a soda. The vendors send back both high-fidelity static images and motion-capture data of those actions. Datagen’s algorithms then expand this data into hundreds of thousands of combinations. The synthesized data sometimes goes through a further round of checks: fake faces are plotted against real faces, for example, to see whether they seem realistic.
Datagen is now generating facial expressions to monitor driver alertness in smart cars, body motions to track customers in cashier-free stores, and irises and hand motions to improve the eye- and hand-tracking capabilities of VR headsets. The company says its data has already been used to develop computer-vision systems serving tens of millions of users.
It’s not just synthetic humans that are being mass-manufactured. Click-Ins is a startup that uses synthetic AI to perform automated vehicle inspections. Using design software, it re-creates all car makes and models that its AI needs to recognize and then renders them with different colors, damages, and deformations under different lighting conditions, against different backgrounds. This lets the company update its AI when automakers put out new models, and helps it avoid data privacy violations in countries where license plates are considered private information and thus cannot be present in photos used to train AI.
Mostly.ai works with financial, telecommunications, and insurance companies to provide spreadsheets of fake client data that let companies share their customer database with outside vendors in a legally compliant way. Anonymization can reduce a data set’s richness yet still fail to adequately protect people’s privacy. But synthetic data can be used to generate detailed fake data sets that share the same statistical properties as a company’s real data. It can also be used to simulate data that the company doesn’t yet have, including a more diverse client population or scenarios like fraudulent activity.
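A toy illustration of that idea (not Mostly.ai’s actual method, and the client figures are invented for the example): fit simple statistics to a real column of data, then sample fresh records that share those statistics without corresponding to any actual person.

```python
import random
import statistics

# Invented "real" client ages, standing in for a sensitive column.
real_ages = [23, 35, 41, 29, 52, 38, 47, 33, 60, 44]
mu = statistics.mean(real_ages)      # 40.2
sigma = statistics.stdev(real_ages)  # about 11

random.seed(0)
# Synthetic records share the real column's mean and spread, but no
# individual record maps back to a real person.
synthetic_ages = [round(random.gauss(mu, sigma)) for _ in range(1000)]

# The synthetic mean lands close to the real mean of 40.2.
print(round(statistics.mean(synthetic_ages), 1))
```

Real products model joint distributions across many correlated columns, but the principle is the same: preserve the statistics, discard the individuals.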
Proponents of synthetic data say that it can help evaluate AI as well. In a recent paper published at an AI conference, Suchi Saria, an associate professor of machine learning and health care at Johns Hopkins University, and her coauthors demonstrated how data-generation techniques could be used to extrapolate different patient populations from a single set of data. This could be useful if, for example, a company only had data from New York City’s more youthful population but wanted to understand how its AI performs on an aging population with higher prevalence of diabetes. She’s now starting her own company, Bayesian Health, which will use this technique to help test medical AI systems.
The limits of faking it
But is synthetic data overhyped?
When it comes to privacy, “just because the data is ‘synthetic’ and does not directly correspond to real user data does not mean that it does not encode sensitive information about real people,” says Aaron Roth, a professor of computer and information science at the University of Pennsylvania. Some data generation techniques have been shown to closely reproduce images or text found in the training data, for example, while others are vulnerable to attacks that make them fully regurgitate that data.
This might be fine for a firm like Datagen, whose synthetic data isn’t meant to conceal the identity of the individuals who consented to be scanned. But it would be bad news for companies that offer their solution as a way to protect sensitive financial or patient information.