Making better decisions with big data personas


A persona is an imaginary figure representing a segment of real people: a communicative design technique aimed at improving user understanding. Through several decades of use, personas have been static data structures, flat frameworks of user attributes with no interactivity. A persona was a means of organizing data about the imaginary person and presenting that information to decision-makers, but it was rarely actionable.

How personas and data work together

With the growth of analytics data, personas can now be generated using big data and algorithmic approaches. This integration of personas and analytics offers a real opportunity to shift personas from flat files of data presentation to interactive interfaces for analytics systems. These persona analytics systems provide both the empathic connection of personas and the rational insights of analytics. In a persona analytics system, the persona is no longer a static, flat file; instead, it is an operational mode of accessing user data. Combining personas and analytics also makes user data easier to employ for those lacking the skills or desire to work with complex analytics. Another advantage is that one can create hundreds of data-driven personas to reflect the behavioral and demographic nuances of the underlying user population.

A “personas as interfaces” approach offers the benefits of both personas and analytics systems while addressing the shortcomings of each. Transforming both the persona creation process and the analytics creation process, personas as interfaces carry theoretical and practical implications for design, marketing, advertising, health care, and human resources, among other domains.

This persona-as-interface approach is the foundation of the persona analytics system Automatic Persona Generation (APG). Advancing the conceptualization, development, and use of both personas and analytics, APG presents a multi-layered, full-stack integration affording three levels of user data presentation: (a) the conceptual persona, (b) the analytical metrics, and (c) the foundational data.
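
To make this layering concrete, here is a minimal sketch of how the three levels might nest in one structure. The schema, field names, and example persona are invented for illustration; APG's internal data model is not public.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """One persona exposing all three levels of user data."""
    name: str                                      # (a) conceptual level
    metrics: dict                                  # (b) analytics level: percentages, weights
    user_rows: list = field(default_factory=list)  # (c) foundational user data

# Hypothetical example persona.
persona = Persona(
    name="Aisha, 24, Doha",
    metrics={"population_share": 0.18, "loyalty": 0.72},
    user_rows=[{"user_id": 101, "sessions": 34}, {"user_id": 207, "sessions": 29}],
)
print(persona.metrics["population_share"])  # drill from the persona down to a metric
```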

APG generates a cast of personas representing the user population, with one persona per segment. Relying on regular data collection intervals, these data-driven personas enrich the traditional persona with additional elements, such as user loyalty, sentiment analysis, and topics of interest, which are features requested by APG customers.

Leveraging intelligence system design concepts, APG identifies unique behavioral patterns of user interactions with products (which can be goods, services, content, interface features, etc.) and then associates these patterns with demographic groups based on the strength of association. After obtaining a grouped interaction matrix, APG applies matrix factorization or other algorithms to identify latent user interaction patterns. Matrix factorization and related algorithms are particularly suited to reducing the dimensionality of large datasets by discerning latent factors.
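
As a rough illustration of this step, the sketch below factorizes a tiny demographic-group-by-content interaction matrix using non-negative matrix factorization (NMF) from scikit-learn. The matrix values, group labels, and the choice of NMF are assumptions made for the example, not APG's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy grouped interaction matrix. Rows: demographic groups; columns:
# content items; cells: interaction counts (views, clicks, etc.).
interactions = np.array([
    [120.0,  5.0, 80.0,  2.0],   # e.g., women 18-24
    [115.0,  8.0, 75.0,  4.0],   # e.g., men 18-24
    [  3.0, 90.0,  6.0, 70.0],   # e.g., women 45-54
    [  5.0, 85.0,  4.0, 65.0],   # e.g., men 45-54
])

# Factorize into k latent behavioral patterns: interactions ~= W @ H.
model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(interactions)   # group-to-pattern association strengths
H = model.components_                   # pattern-to-content weights

# Each group's strongest pattern suggests which persona that demographic
# segment should be associated with.
for group, weights in enumerate(W):
    print(f"group {group}: dominant pattern {weights.argmax()}")
```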

How APG data-driven personas work

APG enriches the user segments produced by the algorithms by adding an appropriate name, picture, social media comments, and related demographic attributes (e.g., marital status, educational level, occupation) drawn from the audience profiles of prominent social media platforms. APG has an internal meta-tagged database of thousands of purchased, copyrighted photos that are appropriate to age, gender, and ethnicity. The system also has an internal database of hundreds of thousands of names, likewise tagged by age, gender, and ethnicity. For example, for a persona of an Indian woman in her twenties, APG automatically selects a name that was popular for girls born in India roughly twenty years ago. The data-driven personas are then displayed to users in the organization via the interactive online system.
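
A hedged sketch of that enrichment step might look like the following, with small lookup tables standing in for APG's meta-tagged name and photo databases, whose real contents and schema are not public.

```python
import random

# Hypothetical stand-ins for APG's internal meta-tagged databases.
NAMES = {
    ("female", "20s", "India"): ["Priya", "Ananya", "Sneha"],
    ("male", "30s", "Finland"): ["Mikko", "Juha", "Antti"],
}
PHOTOS = {
    ("female", "20s", "India"): ["photos/in_f_20s_001.jpg", "photos/in_f_20s_002.jpg"],
    ("male", "30s", "Finland"): ["photos/fi_m_30s_001.jpg"],
}

def enrich_persona(gender: str, age_band: str, country: str) -> dict:
    """Pick a demographically appropriate name and photo for a persona."""
    key = (gender, age_band, country)
    return {
        "name": random.choice(NAMES[key]),
        "photo": random.choice(PHOTOS[key]),
        "gender": gender,
        "age_band": age_band,
        "country": country,
    }

print(enrich_persona("female", "20s", "India"))
```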

APG starts from the foundational user data, which the system's algorithms transform into information about users. The outcome of this algorithmic processing is actionable metrics and measures about the user population (percentages, probabilities, weights, etc.) of the kind one would typically see in industry-standard analytics packages. Presenting these actionable metrics is the next level of abstraction in APG. The result is a persona analytics system capable of presenting user insights at different levels of granularity, with the levels both integrated and appropriate to the task.
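
As a minimal sketch, assuming the factorization outputs a user-to-persona weight matrix, the metrics level could be derived along these lines; all numbers are invented for illustration.

```python
import numpy as np

# Toy user-to-persona weight matrix, e.g., the W output of a factorization.
# Rows: users; columns: personas.
persona_weights = np.array([
    [0.9, 0.1, 0.0],
    [0.2, 0.7, 0.1],
    [0.0, 0.3, 0.7],
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
])

# Probabilities: normalize each user's weights so rows sum to 1.
membership = persona_weights / persona_weights.sum(axis=1, keepdims=True)

# Percentages: assign each user to their dominant persona, then count.
dominant = membership.argmax(axis=1)
share = np.bincount(dominant, minlength=persona_weights.shape[1]) / len(dominant)
for p, s in enumerate(share):
    print(f"persona {p}: represents {s:.0%} of the user population")
```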

For example, C-level executives may want a high-level view of the users, for which personas are suitable. Operational managers may want a probabilistic view, for which the analytics are appropriate. Implementers may need to act on individual users directly, such as in a marketing campaign, for which the user-level data is more suitable.

Each level of APG can be broken down as follows:

Conceptual level: personas. The highest level of abstraction, the conceptual level, is the set of personas that APG generates from the data using the method described above, with a default of ten personas; in principle, APG can generate as many personas as needed. Each persona has nearly all the attributes found in traditional flat-file persona profiles. In APG, however, personas as interfaces allow for dramatically increased interactivity: the decision-maker can alter the default number to generate more or fewer personas, with the system currently set for between 5 and 15. The system also supports searching a set of personas and leveraging analytics to predict persona interests.
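
A minimal sketch of that interactivity, assuming the factorization approach described earlier: refit with a user-chosen number of components, one persona candidate per component, and report reconstruction error as a rough quality guide. The toy data and the bounds check are illustrative, not APG's actual generation logic.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
interactions = rng.poisson(3.0, size=(40, 25)).astype(float)  # toy data

def generate_personas(matrix, n_personas):
    """Fit one latent pattern (persona candidate) per component."""
    if not 5 <= n_personas <= 15:
        raise ValueError("interface currently allows between 5 and 15 personas")
    model = NMF(n_components=n_personas, init="nndsvda", random_state=0, max_iter=500)
    W = model.fit_transform(matrix)
    return W, model.components_, model.reconstruction_err_

# A decision-maker dials the persona count up or down; error drops as
# more personas capture more of the variation in the data.
for n in (5, 10, 15):
    _, _, err = generate_personas(interactions, n)
    print(f"{n:>2} personas -> reconstruction error {err:.2f}")
```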

Analytics level: percentages, probabilities, and weights. At the analytics level, APG personas act as interfaces to the underlying information and data used to create them. The specific information may vary somewhat by data source, but the analytics level reflects the metrics and measures generated from the foundational user data and used to create the personas. In APG, the personas afford access to this analytics information via clickable icons on the persona interface. For example, APG displays the percentage of the entire user population that a particular persona represents. This insight helps decision-makers judge the importance of designing or developing for a specific persona, and it addresses the question of a persona's validity in representing actual users.

User level: individual data. Leveraging the demographic metadata from the underlying factorization algorithm, decision-makers can access specific user-level data (individual or aggregate) directly within APG. This numerical user data, in its various forms, is the foundation of both the personas and the analytics.
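
Assuming the same factorization setup, a drill-down from a persona to its underlying individuals might be sketched as follows; the per-user data here is synthetic.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
user_item = rng.poisson(2.0, size=(100, 20)).astype(float)  # per-user interactions

model = NMF(n_components=5, init="nndsvda", random_state=0, max_iter=500)
user_weights = model.fit_transform(user_item)  # rows: users; columns: personas

def users_for_persona(persona_idx, top_n=5):
    """Indices of the individual users who load most heavily on a persona."""
    return np.argsort(user_weights[:, persona_idx])[::-1][:top_n]

# From the persona interface, drill down to the underlying individuals.
print("top users behind persona 2:", users_for_persona(2))
```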

The implications of data-driven personas

The conceptual shift from personas as flat files to personas as interfaces for enhanced user understanding opens new possibilities for interaction among decision-makers, personas, and analytics. With data-driven personas embedded as the interface to an analytics system, decision-makers gain the empathic bond that personas create between stakeholders and user data while retaining access to the practical user numbers. There are several practical implications for managers and practitioners: above all, personas become actionable, because they accurately reflect the underlying user data. This full-stack integration has not previously been available with either personas or analytics.

APG is a fully functional system deployed with real client organizations. Please visit https://persona.qcri.org to see a demo.

This content was written by Qatar Computing Research Institute, Hamad Bin Khalifa University, a member of Qatar Foundation. It was not written by MIT Technology Review’s editorial staff.

Meet Jennifer Daniel, the woman who decides what emoji we get to use


Emoji are now part of our language. If you’re like most people, you pepper your texts, Instagram posts, and TikTok videos with various little images to augment your words—maybe the syringe with a bit of blood dripping from it when you got your vaccination, the prayer (or high-fiving?) hands as a shortcut to “thank you,” a rosy-cheeked smiley face with jazz hands for a covid-safe hug from afar. Today’s emoji catalogue includes nearly 3,000 illustrations representing everything from emotions to food, natural phenomena, flags, and people at various stages of life.

Behind all those symbols is the Unicode Consortium, a nonprofit group of hardware and software companies aiming to make text and emoji readable and accessible to everyone. Part of its goal is to make languages look the same on all devices; a Japanese character should be typographically consistent across all media, for example. But Unicode is probably best known for being the gatekeeper of emoji: releasing them, standardizing them, and approving or rejecting new ones.

Jennifer Daniel is the first woman at the helm of the Emoji Subcommittee for the Unicode Consortium and a fierce advocate for inclusive, thoughtful emoji. She initially rose to prominence for introducing Mx. Claus, a gender-inclusive alternative to Santa and Mrs. Claus; a non-gendered person breastfeeding a non-gendered baby; and a masculine face wearing a bridal veil. 

Now she’s on a mission to bring emoji to a post-pandemic future in which they are as broadly representative as possible. That means taking on an increasingly public role, whether it’s with her popular and delightfully nerdy Substack newsletter, What Would Jennifer Do? (in which she analyzes the design process for upcoming emoji), or inviting the general public to submit concerns about emoji and speak up if they aren’t representative or accurate.

“There isn’t a precedent here,” Daniel says of her job. And to Daniel, that’s exciting not just for her but for the future of human communication.

I spoke to her about how she sees her role and the future of emoji. The interview has been lightly edited and condensed. 

What does it mean to chair the subcommittee on emoji? What do you do?

It’s not sexy. [laughs] A lot of it is managing volunteers [the committee is composed of volunteers who review applications and help in approval and design]. There’s a lot of paperwork. A lot of meetings. We meet twice a week.

I read a lot and talk to a lot of people. I recently talked to a gesture linguist to learn how people use their hands in different cultures. How do we make better hand-gesture emoji? If the image is no good or isn’t clear, it’s a dealbreaker. I’m constantly doing lots of research and consulting with different experts. I’ll be on the phone with a botanical garden about flowers, or a whale expert to get the whale emoji right, or a cardiovascular surgeon so we have the anatomy of the heart down. 

There’s an old essay by Beatrice Warde about typography. She asked if a good typeface is a bedazzled crystal goblet or a transparent one. Some would say the ornate one because it’s so fancy, and others would say the crystal goblet because you can see and appreciate the wine. With emoji, I lend myself more to the “transparent crystal goblet” philosophy. 

Why should we care about how our emoji are designed?

My understanding is that 80% of communication is nonverbal. There’s a parallel in how we communicate. We text how we talk. It’s informal, it’s loose. You’re pausing to take a breath. Emoji are shared alongside words.

When emoji first came around, we had the misconception that they were ruining language. Learning a new language is really hard, and emoji is kind of like a new language. It works with how you already communicate. It evolves as you evolve. How you communicate and present yourself evolves, just like yourself. You can look at the nearly 3,000 emoji and it [their interpretation] changes by age or gender or geographic area. When we talk to someone and are making eye contact, you shift your body language, and that’s an emotional contagion. It builds empathy and connection. It gives you permission to reveal that about yourself. Emoji can do that, all in an image.

Product design gets an AI makeover


It’s a tall order, but one that Zapf says artificial intelligence (AI) technology can support by capturing the right data and guiding engineers through product design and development.

No wonder a November 2020 McKinsey survey reveals that more than half of organizations have adopted AI in at least one function, and 22% of respondents report at least 5% of their companywide earnings are attributable to AI. And in manufacturing, 71% of respondents have seen a 5% or more increase in revenue with AI adoption.

But that wasn’t always the case. Once “rarely used in product development,” AI has experienced an evolution over the past few years, Zapf says. Today, tech giants known for their innovations in AI, such as Google, IBM, and Amazon, “have set new standards for the use of AI in other processes,” such as engineering.

“AI is a promising and exploratory area that can significantly improve user experience for designing engineers, as well as gather relevant data in the development process for specific applications,” says Katrien Wyckaert, director of industry solutions for Siemens Industry Software.

The result is a growing appreciation for a technology that promises to simplify complex systems, get products to market faster, and drive product innovation.

Simplifying complex systems

A perfect example of AI’s power to overhaul product development is Renault. In response to increasing consumer demand, the French automaker is equipping a growing number of new vehicle models with an automated manual transmission (AMT)—a system that behaves like an automatic transmission but allows drivers to shift gears electronically using a push-button command.

AMTs are popular among consumers, but designing them can present formidable challenges. That’s because an AMT’s performance depends on the operation of three distinct subsystems: an electro-mechanical actuator that shifts the gears, electronic sensors that monitor vehicle status, and control software embedded in the transmission control unit. Because of this complexity, it can take up to a year of extensive trial and error to define the system’s functional requirements, design the actuator mechanics, develop the necessary software, and validate the overall system.

In an effort to streamline its AMT development process, Renault turned to Simcenter Amesim software from Siemens Digital Industries Software. The simulation technology relies on artificial neural networks, AI “learning” systems loosely modeled on the human brain. Engineers simply drag, drop, and connect icons to graphically create a model. When displayed on a screen as a sketch, the model illustrates the relationship between all the various elements of an AMT system. In turn, engineers can predict the behavior and performance of the AMT and make any necessary refinements early in the development cycle, avoiding late-stage problems and delays. In fact, by using a virtual engine and transmissions as stand-ins while developing hardware, Renault has managed to cut its AMT development time almost in half.
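
The snippet below is a hedged sketch of the general surrogate-modeling idea: train a small neural network to mimic an expensive simulation so that engineers can sweep candidate designs cheaply. The one-parameter "simulator" and the scikit-learn model are stand-ins for illustration, not Simcenter Amesim's actual approach.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulation(x):
    """Stand-in for a slow physics simulation, e.g., predicted shift
    time as a function of one actuator design parameter."""
    return np.sin(x) + 0.1 * x**2

# Sample the simulator sparsely to build training data.
X = np.linspace(0.5, 5.0, 40).reshape(-1, 1)
y = expensive_simulation(X).ravel()

# Train a small neural network to mimic the simulator.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
surrogate.fit(X, y)

# Engineers can now evaluate candidate designs almost instantly.
candidates = np.array([[1.2], [2.8], [4.4]])
print("surrogate:", surrogate.predict(candidates).round(3))
print("simulator:", expensive_simulation(candidates).ravel().round(3))
```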

Speed without sacrificing quality

Emerging environmental standards are also prompting Renault to rely more heavily on AI. To comply with new carbon dioxide emissions standards, Renault has been working on the design and development of hybrid vehicles. But hybrid engines are far more complex to develop than engines in vehicles with a single energy source, such as a conventional car. That’s because hybrid engines require engineers to perform complex feats like balancing the power required from multiple energy sources, choosing from a multitude of architectures, and examining the impact of transmissions and cooling systems on a vehicle’s energy performance.

“To meet new environmental standards for a hybrid engine, we must completely rethink the architecture of gasoline engines,” says Vincent Talon, head of simulation at Renault. The problem, he adds, is that carefully examining “the dozens of different actuators that can influence the final results of fuel consumption and pollutant emissions” is a lengthy and complex process, made more difficult by rigid timelines.

“Today, we clearly don’t have the time to painstakingly evaluate various hybrid powertrain architectures,” says Talon. “Rather, we needed to use an advanced methodology to manage this new complexity.”

For more on AI in industrial applications, visit www.siemens.com/artificialintelligence.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

AI consumes a lot of energy. Hackers could make it consume more.


The attack: In this kind of input-adaptive neural network, changing the input, such as the image the model is fed, changes how much computation the model needs to solve it. This opens up a vulnerability that hackers could exploit, as researchers from the Maryland Cybersecurity Center outline in a new paper being presented at the International Conference on Learning Representations this week. By adding small amounts of noise to a network’s inputs, they made it perceive the inputs as more difficult and jack up its computation.
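
To see why input-adaptive computation creates this attack surface, consider a heavily simplified early-exit model: it stops computing once an intermediate classifier is confident, so noise that pushes an input toward ambiguity forces more layers to run. Everything below is illustrative, not the Maryland team's actual method or models.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Toy input-adaptive model: stops at the first exit whose
    confidence clears a threshold, so easy inputs use less compute."""

    def __init__(self, n_blocks=6, threshold=0.9):
        super().__init__()
        # Identity blocks keep the toy deterministic; a real model
        # would have convolutional or linear layers here.
        self.blocks = nn.ModuleList(nn.Identity() for _ in range(n_blocks))
        self.threshold = threshold

    def forward(self, x):
        blocks_used = 0
        for block in self.blocks:
            x = block(x)
            blocks_used += 1
            # Each exit head here is just a softmax over the features.
            confidence = torch.softmax(x, dim=-1).max()
            if confidence > self.threshold:  # confident enough: stop early
                break
        return blocks_used

net = EarlyExitNet()
clean = torch.tensor([[4.0, 0.0]])           # clearly class 0: exits early
noisy = clean + torch.tensor([[-2.0, 2.0]])  # perturbed toward ambiguity

print("blocks used on clean input:", net(clean))  # 1
print("blocks used on noisy input:", net(noisy))  # 6 (all blocks run)
```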

When they assumed the attacker had full information about the neural network, they were able to max out its energy draw. When they assumed the attacker had little to no information, they were still able to slow down the network’s processing and increase energy usage by 20% to 80%. The reason, the researchers found, is that the attacks transfer well across different types of neural networks. Designing an attack for one image classification system is enough to disrupt many, says Yiğitcan Kaya, a PhD student and paper coauthor.

The caveat: This kind of attack is still somewhat theoretical. Input-adaptive architectures aren’t yet commonly used in real-world applications. But the researchers believe this will quickly change as industry pressure grows to deploy lightweight neural networks, such as for smart home and other IoT devices. Tudor Dumitraş, the professor who advised the research, says more work is needed to understand the extent to which this kind of threat could create damage. But, he adds, this paper is a first step to raising awareness: “What’s important to me is to bring to people’s attention the fact that this is a new threat model, and these kinds of attacks can be done.”
