
AI in UX Research: The Good, the Bad and the Final Verdict

Ready to add AI to your UX research toolbox? Make sure you're considering these benefits and potential pitfalls.

10 min read

Is AI the existential threat we think it is, set to replace all human UX researchers? Or can designers and researchers harness and incorporate its mind-boggling capabilities into their work? Read on to find out the risks and benefits of AI in UX research.

In this article, you’ll learn:

  • Capabilities that AI brings to the research table
  • Drawbacks of using generative AI in UX research
  • Key factors to consider while developing any new technology
  • Advice from experts at Google and Microsoft about using AI in research

The Good: Benefits of AI in UX Research

Following its $10 billion investment in OpenAI*, Microsoft announced an upcoming AI copilot for its Office suite. Copilot boasts the ability to quickly analyze data, generate reports and create presentations (among many other things). Working professionals (including yours truly) were left in limbo, worrying about their long-term job prospects.

Sound a little too close to home? Let us alleviate your fears.

Microsoft called it “Copilot,” implying someone else is flying the plane: you.

AI may be ground-breaking technology, but at the end of the day it exists to serve humans. As the ones directing that technology, it’s increasingly important for us to understand how to use AI effectively during analysis, and for what. Learn how AI can become your ultimate UX research sidekick (or your copilot, or the Robin to your Batman, or… you get the idea).


Optimize UX Workflow

Qualitative research provides a holistic understanding of phenomena through rich insights. Arriving at those insights involves a tedious, cumbersome data annotation process called coding or tagging.

Traditionally, UX researchers toiled away, spending hours transcribing and tagging interview data. Not anymore. Research tools like Marvin use analytical AI to create transcripts of virtually any interview recording you throw at them. By automating these mechanical tasks, they free up a researcher’s time to focus wholly on the participant and conduct deeper analysis. Marvin users spend 60% less time analyzing UX research data. Set your research up for success: explore our guide on how to use tags to reach insights faster.

One thing is indisputable: AI is bound to make workers more efficient. With AI in their toolkit, UX researchers and product designers will spend less time wrestling with tricky, convoluted data and more time on analysis.

Enable Large-Scale Analysis with AI in UX Research

Improved efficiency opens the door to data analysis at scale. Deconstructing an interview transcript is a complex process. Researchers can suffer from the dreaded data overload: there’s simply too much to unpack in the unstructured data from individual interviews and group sessions, the verbal and non-verbal cues, and the overlapping themes.

AI facilitates the analysis of vast amounts of data: sit back and let it pore over mountains of historical data. It can predict likely customer behavior and evaluate how customers interact with designs. This scale of predictive analysis simply wasn’t possible before (not for a human, anyway!).

Start your analysis on the right foot. Marvin uses generative AI to create synopses of interviews, condensing key points from hours of interview time into a paragraph or two. It’s far preferable to get a quick gist than to read a twenty-page transcript. Marvin’s AI also generates auto-notes for your recordings. A solid structure and foundation pave the way for deeper analysis, and by delegating the heavy lifting to AI, researchers can do more with their newfound time.

[DISCLAIMER: We encourage you not to think of the output as the final product, but a starting point for your analysis.]
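
To make that concrete, here’s a minimal sketch of the general pattern behind AI-generated synopses: hand the transcript to a language model along with a summarizing instruction. This is not how Marvin works under the hood; the model name, prompt and helper function are assumptions for illustration only.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

def summarize_transcript(transcript: str) -> str:
    """Ask a generative model for a short synopsis of an interview transcript."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice for this sketch
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize this user-research interview in one short paragraph, "
                    "highlighting goals, pain points and notable quotes."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```

However the synopsis is produced, treat it the same way: a quick orientation that tells you where to dig deeper, not a finished finding.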

Deliver Consistency & Reliability

What machines do better than humans is follow instructions to the letter. Researchers can define an interpretive grid (a coding scheme) and set AI models to work so that they perform some of the initial heavy lifting. Any cases the algorithm fails to handle can then be examined further.
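
As a toy illustration of that consistency, the sketch below applies the same hypothetical codebook to every excerpt and flags anything it cannot classify for human review. A real system would use a trained model rather than keyword rules; the codes and keywords here are invented for the example.

```python
# A deliberately simple, deterministic stand-in for an AI model applying a coding
# scheme: every excerpt is checked against the same codebook, and anything the
# rules cannot classify is flagged for the researcher to review by hand.
CODEBOOK = {
    "pricing": ["price", "cost", "expensive", "subscription"],
    "usability": ["confusing", "hard to find", "intuitive", "easy"],
    "support": ["help", "support", "documentation", "tutorial"],
}

def apply_codes(excerpt: str) -> list[str]:
    text = excerpt.lower()
    codes = [code for code, keywords in CODEBOOK.items()
             if any(keyword in text for keyword in keywords)]
    return codes or ["UNCODED"]  # failure cases go back to the researcher

excerpts = [
    "The subscription felt expensive for what we got.",  # -> ['pricing']
    "Honestly the dashboard was confusing at first.",    # -> ['usability']
    "I just wish the onboarding had been warmer.",       # -> ['UNCODED']
]
for excerpt in excerpts:
    print(apply_codes(excerpt), "-", excerpt)
```

The point isn’t the keyword matching; it’s that the same grid is applied to every excerpt, and the exceptions come back to a human.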

Bias is inherent in every study, and unfortunately much of it is attributable to the researcher. AI can be trained and corrected to detect and counteract that bias, making studies more equitable across the board.

Privacy is a fundamental right, and protecting user privacy is of utmost importance while conducting user research. We obsess over protecting participants’ Personally Identifiable Information (PII), such as their names, contact details, gender and occupation. We teamed up with Assembly AI to pioneer a first-of-its-kind PII redaction model that automatically strips participant PII from audio and video files. Powered by AI, with more peace of mind for everyone.
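
To give a feel for what redaction looks like, here’s a deliberately simple, text-only sketch using regular expressions. The production model described above works on audio and video and relies on trained entity recognition rather than regexes; the patterns below are illustrative assumptions.

```python
import re

# Toy patterns for two kinds of PII; a real redaction model recognizes many more.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched PII with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Reach me at jane.doe@example.com or +1 (555) 010-2030."))
# Reach me at [EMAIL REDACTED] or [PHONE REDACTED].
```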

Uncover New Patterns with AI in UX Research

Analytical AI has the capacity to unearth unexpected and interesting insights. Given the right prompt, it can detect patterns and themes in textual data, and generate insights that even researchers may have missed. 

AI levels the playing field, enabling multilingual analysis and promoting cultural diversity in research. With text mining and natural language processing (NLP), research tools like Marvin can translate and transcribe numerous languages. This breaks down communication barriers, overcoming differences in language, form, style and academic convention.

With AI, a refreshing and novel idea may be just around the corner.

Facilitate Collaboration

AI originally catered to computer science and engineering, but it now permeates many other disciplines: healthcare, business and finance, psychology and neuroscience.

At Marvin, we wanted a platform that centralizes all user insights so they live in one place. No more duplicated effort: harness the power of a centralized research repository. We also want users across disciplines to keep using the tools they already rely on. Share playlists, clips and insights with your peers. Whether they’re researchers or not, everyone benefits from first-hand feedback from end users.

LiveNotes, our collaborative note-taking tool, enables you to create time-stamped insights, so you can quickly bookmark and annotate important parts of an interview while conducting it. Your findings remain accessible and editable later, so you can synthesize them with colleagues live or after the session. Integrate your video conferencing platform of choice (Zoom, Meet, Teams) and document your observations with your peers.

Our core values ring true: Elevate the user voice across your organization. Create a customer-centric culture.

The Bad: Risks of AI in UX Research

On the road to implementing new tech, there are bound to be bumps along the way. AI will only get better with each iteration; GPT-4 is far more advanced and capable than its predecessors.

Call them usability enhancements or bug fixes, the fact is we learn more about the nuances of an application the longer we use it. It’s vital to understand the limitations of AI in UX research work, drawing the line between what it can help you with and what it can’t.

Baked-in Bias

Bias is a double-edged sword. We’re all biased, whether we care to admit it or not. Above, we looked at how AI can reduce human bias by automating certain procedures. But consider this: every AI model is coded by a developer. Developers may inadvertently bake bias into their models, resulting in skewed analysis and reinforcing existing social prejudices, stereotypes or inequalities.

Plenty of recent examples illustrate how big tech companies failed to detect inherent bias in their systems. Companies such as Amazon, Microsoft and Google have been in the news for the wrong reasons — their AI algorithms unknowingly exhibited racial and gender bias. 

Google Research Scientist Rida Qadri recently conducted a study on the (poor) representation of South Asian cultures in text-to-image AI models. Rida suggests that since these models treat the internet as a collective archive of information, they are largely representative of the Western world. As a result, South Asian cultures are often poorly represented, if at all.

There are serious ethical considerations to take into account when developing new AI technologies. Researchers must delve deeper into AI output and seek to identify and correct any biases. Award-winning researcher Mary Gray questions whether we are failing certain groups in society. She touts qualitative research as key to building more ethically responsible AI.

Less Context

How well does AI provide context for your qualitative insights?

Check out these two studies:

  1. AI versus human researchers conducting qualitative analysis
  2. Deloitte’s European Workforce survey

Here’s a quick tl;dr recap:

Actual human researchers faced off against a machine to unearth qualitative insights from an open-ended question. While the machine took considerably less time to conduct its analysis, the human analysis was more in-depth and comprehensive. The machine’s categorizations were simplistic and unhelpful. In a sentiment analysis, the machines missed the crux of participant responses by scoring each word in isolation, failing to group responses into coherent themes at all.
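
That word-by-word failure mode is easy to reproduce. In the toy sketch below, each word is scored in isolation against a tiny, made-up lexicon, so negation and phrasing get lost, which is exactly the kind of miss the study describes.

```python
# Naive word-level sentiment: every word is scored on its own, with no sense of
# negation, phrasing or theme. Lexicon values are invented for illustration.
LEXICON = {"bad": -1, "good": 1, "love": 2, "hate": -2}

def word_level_sentiment(response: str) -> int:
    return sum(LEXICON.get(word.strip(".,!").lower(), 0) for word in response.split())

print(word_level_sentiment("The new flow is not bad at all"))    # -1: read as negative
print(word_level_sentiment("I don't love the checkout screen"))  #  2: read as positive
```

A human reader gets both of those right without thinking; the word-level scorer gets both wrong.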

As humans, the researchers understood the question and the reasoning behind it. While classifying responses, they could be relied on to provide the right context, group phrases together and tag data intuitively.

We understand other humans — what their responses mean and what they are alluding to. That’s what separates man from machine. 

Lost Human Touch

“AI is limited by its inability to possess human-level understanding of the social world.” – Hubert L. Dreyfus, 1992.

There’s no getting around it: the responsibility for creating a study, and for all conclusions drawn from it, will always rest with human researchers. No matter how comprehensive the results or insights, AI cannot be accorded any ownership or authorship of research. AI has no common sense, no ability to learn from experience and no understanding of social and cultural nuances.

Qualitative research relies a lot on forming inferences from an interaction. When you interview a participant, you choose how to navigate the interview, forming new questions and interpretations along the way. 

AI lacks that human touch: the intuition and experience that a lifetime of interactions gives us. We spoke earlier about giving interviewees your full attention; AI wouldn’t pick up on subtleties such as a change in facial expression or a shift in tone of voice.

Humans can gauge sentiment. Fidelity’s VP of Design Ben Little spoke to us about how craftsmanship is making a comeback. A techno-optimist, Ben says that the mass adoption of AI will only increase the novelty of human-crafted design. There are some things you can’t replace; AI just doesn’t have lived experience, imagination and empathy — three inherent human traits.

Noisy Data

Ever heard of GIGO? A computer-science play on acronyms like the first-in, first-out (FIFO) inventory accounting method (yawn), it stands for “garbage in, garbage out.”

Quite simply, what you put into something, you get out of it. We’re not in the business of doling out life lessons, but the same logic applies to any AI model. The quality of output is dependent on the quality of input. Put garbage in, and don’t be surprised when garbage comes out. 

Ask any data analyst, product designer or researcher: the bulk of their time is spent cleaning and sanitizing data. Output from AI models is only as good as the pre-processed, accurate data that goes in, and AI struggles with inconsistent or ambiguous data.
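
Cleaning doesn’t have to be elaborate to matter. The sketch below shows one small, assumed pre-processing pass over interview utterances: stripping filler words and normalizing whitespace before anything reaches a model.

```python
import re

# A tiny example filler list; real cleaning pipelines go much further
# (speaker labels, timestamps, typos, encoding issues and so on).
FILLERS = re.compile(r"\b(um+|uh+|you know)\b,?", flags=re.IGNORECASE)

def clean_utterance(utterance: str) -> str:
    """Strip filler words and collapse extra whitespace."""
    text = FILLERS.sub("", utterance)
    return re.sub(r"\s+", " ", text).strip()

raw = ["Um, I guess the export button was uh hard to find  ", "   "]
cleaned = [c for u in raw if (c := clean_utterance(u))]  # drop empty utterances
print(cleaned)  # ['I guess the export button was hard to find']
```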

Mary talked passionately about ensuring models have a complete and comprehensive dataset to begin with. Rida explained that this isn’t so easy. It’s hard to shake off the indelible Western influence over technology. Since AI uses the web as the foundational library of all information, it’s important to remember that it’s not all-encompassing.

Continually ask yourself — are we training AI models on the right dataset?

Overreliance on Technology

Ask any youngster a general-knowledge or simple math question today and watch them consult their smartphone before their brain. Spend a day at work without your phone and notice how disconnected and naked you feel. Surprise, surprise: we’re all a bit too reliant on technology.

The same goes for UX researchers. If AI is doing all the work for you, your neurons aren’t exactly firing on all cylinders. Going through the motions, researchers can miss insights and lose interest in automated tasks, which diminishes their critical thinking and analytical skills.

Mechanization stifles creativity. Design is an artistic field, a realm of imagination and innovation. Leaning too heavily on tech can make us lazy, complacent and boring. Does churning out something that’s been done to death ever work?

Likely not.

The Final Verdict on AI in UX Research

[Admittedly, we dropped the ball with the title of this section. We wanted to echo the Clint Eastwood Western classic, but we couldn’t bring ourselves to paint the future as ‘ugly’. Read on to find out why.]

What does the future hold for UX researchers and AI? We’re quite bullish on AI in research. Our two cents (literally): two factors to keep in mind constantly as you build and introduce new products into the world:

Consider the Community Impact

Technology has a transformational impact on society. Computers used to fill entire rooms; now they sit in our pockets. With a few swipes, you can have groceries delivered to your home, learn a new language or hail a taxi.

Rida Qadri examined how technology is continually shaped by social contexts. She spent time in Jakarta studying mobility platforms and how they adapted to the existing mobility landscape. Uber revolutionized travel and mobility in the West, but its approach doesn’t translate across the globe. As designers, we don’t get to tell users how to use our technology. No one can predict user behavior.

We may have the best intentions when releasing groundbreaking AI technologies. But we do communities, and humanity at large, a disservice if we don’t spend substantial time trying to understand the impact AI will have on them.

Try to understand the impact these technologies have not just on your immediate users, but on wider communities, even ones you might not be examining today.

Maintain User-Centric Design

Throughout the product design process, the most important questions designers must ask themselves are:

  1. Why are we building this?
  2. What problem(s) does it solve for our customers?
  3. What potential challenges might we encounter?

This is the crux of establishing user empathy. Never lose sight of these core questions; circle back to them constantly.

And remember, a customer-centric culture is not something that AI can build for you.

In a Nutshell: AI in UX Research Will Make Us Better Researchers

There’s no reason why UX professionals and AI can’t enjoy a peaceful coexistence. AI facilitates rapid and thorough analysis, giving designers and researchers the gift of time. It comes with its fair share of limitations, which must be fully understood while using it. As with any nascent technology, there are plenty of kinks to iron out.

Central to leveraging AI will always be the human behind it. Technology is here to serve humans, not the other way around. The rise of AI is not necessarily about replacing researchers and designers, but about empowering them to be more productive. Think of AI less as a threat and more as an apparatus in the UX toolkit, one that supercharges your productivity.

*Quick note that Sam Altman is one of Marvin’s lead investors. We’re big fans of his work and grateful for his support of our user research platform!
