
How AI Is Changing Design and Research

The HeyMarvin founders share practical advice on how to incorporate AI into your UX research workflow.


AI is shaking up the design and research industries, and each advancement increases the pressure from leaders to build products faster.

So how can designers and researchers incorporate AI into their design process?

Marvin’s founders Prayag and Chirag Narula are here to help!

The two reflected on how much has changed for AI in research during the past six months. They also shared their thoughts on how to future-proof your design practice with AI.

Want more than just the highlights? Check out “How AI Will Change Your UX Job for the Better.”

Are AI Chat Interfaces Only the Beginning?

Since ChatGPT was released, products have been constantly rolling out AI features and functionality. With one peculiar common thread: everyone seems to have taken the same page out of OpenAI’s book.

AI engines today have become synonymous with conversational or chat interfaces.

Which means you have to constantly engage with the machine or computer to get the output you desire.

Chirag attributes this widespread adoption of talking to a computer to pop culture: “We’ve seen it since the Star Wars and Star Trek days.”

At Marvin, we’ve chosen a slightly different approach. Marvin’s Ask AI uses a Google-esque search bar that searches and analyzes data across your entire repository.

Chirag confirmed that this is by design. “Conversational UI with a chat interface is a lazy solution,” he said. Why?

A Designer’s Example of How AI Is Changing Design

We’re human. Human beings don’t always have coherent thoughts (some of us never do!). It’s tough enough to converse with another human. Let alone a machine. And for most, it’s extremely difficult to convert their thoughts into words.

“How do you communicate a shape to someone? I’d rather draw than talk about it,” he said. 

Confronted with a blank canvas, professionals can struggle to get their work going.

“The biggest writer’s block is an empty page. When you have the ultimate freedom to ask anything, you don’t have any freedom at all. There’s so many things you can do with it,” Chirag said. 

He thinks we’re at an early stage of interacting with AI. There’s more to come. 

Chirag drew comparisons with AI’s current state to early generations of computers. Back then, engineers had to interact with the terminal using a command line interface. The Graphical User Interface (GUI) entered the fray and changed everything. It brought a user-friendly approach to computing. 

Bill Gates echoes these sentiments. In this article, he describes how AI is as revolutionary as GUIs.

Chirag sees an opportunity to build an interface layer on top of chat. One that’s easier to interact with. One that understands exactly what customers want from an AI model.

Communication Is Key for Designers

Until that day comes, we’re stuck with chat interfaces.

Therefore, embedding AI into your workflow requires knowledge of writing a good prompt. Chirag alluded to how becoming a prompt engineer has become a lucrative career path.

Do design and research teams need to carve out roles for prompt engineers?

Chirag doesn’t think that they have a place in the UX function. “You don’t want to rely on a prompt engineer to give you a part of the design,” he said. 

That’s a researcher or designer’s domain. 

Besides, he thinks that capability already exists in the team: 

“I believe that any designer or user researcher is capable of becoming this ‘prompt engineer,’” he said.

Using AI during the design process means replacing a step that a human would ordinarily carry out. You’re turning numerous steps into a prompt. Chirag believes that the onus is on designers to write these prompts themselves.

“It will, or has already become a primary job requirement of every designer and user researcher,” he said.

Prayag thinks that shouldn’t be a problem for designers and researchers. It draws on the essence of design:

“At its core, being a designer is being able to communicate effectively with users,” he said.

The entire product profession relies on communication. Whether it’s via user interfaces, or text in an application, or a product’s documentation. It’s all about communicating with users during the journey we want them to have.

How AI Augments the Design & Research Process

Chirag shared advice for researchers and designers trying to upskill and stay current with AI trends:

“You should already be using ChatGPT (and similar tools) to get a lot of stuff done. It’s super helpful and really elevates your work,” he said.

How do researchers and designers use widely available AI to expedite parts of the research process?

Know Your Goal

What are you looking to discover? To begin with, you need to have a destination in mind. Chirag outlined a list of questions to get you started:

  • What is the project’s goal? 
  • What are you trying to achieve?
  • How are you going to achieve that?
  • What is your hypothesis?
  • What is the best way to reach this goal?
  • What methodologies does this require?
  • How many users do you need for your research?

Become really good at identifying what you’re looking to get out of a project. 

Perfecting the Prompt

In an earlier post on structuring the perfect prompt, we introduced readers to the CLEAR framework and shared a few additional resources to level up your prompt game.

Since then, Chirag has been experimenting extensively with prompts. 

“After playing with AI for so long (every day), I realized it’s like talking to a very smart baby,” he said.

He elaborated, “A very smart baby who has all the knowledge of the world. But you have to be smarter in (that) you have to know where to go,” he said.

As you direct the prompts, you have to know what the end result looks like and persuade AI to get you there. Chirag recommends writing step-by-step prompts that set the context and the rules. 

He warns everyone to prepare themselves for a lot of trial and error. It’s important to point this out to the AI engine with commands like “You did ___ in the past. For the next step, don’t do that.”

Your prompt structure should look something like this:

  1. Context
  2. Rules
  3. Steps
  4. DON’Ts
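As a sketch, the four-part structure above can be assembled programmatically before you paste it into a chat tool. Everything here (the function name, the section labels, the sample content) is illustrative, not taken from Chirag’s own prompts:

```python
def build_prompt(context, rules, steps, donts):
    """Assemble a prompt from the four parts above: context, rules,
    steps and DON'Ts. All names and labels here are illustrative."""
    lines = [f"Context: {context}", "Rules:"]
    lines += [f"- {r}" for r in rules]
    lines.append("Steps:")
    lines += [f"{i}. {s}" for i, s in enumerate(steps, 1)]
    lines.append("DON'Ts:")
    lines += [f"- {d}" for d in donts]
    return "\n".join(lines)

prompt = build_prompt(
    context="You are a UX research assistant for a B2B SaaS product.",
    rules=["Keep questions neutral", "One idea per question"],
    steps=["Draft five interview questions", "Flag any leading questions"],
    donts=["Don't suggest solutions to the participant"],
)
print(prompt)
```

Keeping each part separate like this makes the trial-and-error loop cheaper: when a run goes wrong, you add one line to the DON’Ts list instead of rewriting the whole prompt.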

CAUTION: Chirag has seen people use the output of one step as the input for the next. He advises against feeding AI results into a subsequent AI prompt, as it can skew results.

Brainstorm and Refine Studies with AI

Now that you know how to (loosely) structure your prompts, use AI as the ultimate UX assistant. AI can wear many hats — it’s your thinking partner and quality control all rolled into one. 

To begin, Chirag recommends a role-playing exercise with AI. This tells the AI who it must act like and who it’s working for. He provided an example that clarifies how to establish context in your prompts:

“You are a UX research assistant working for a metallurgical company, trying to achieve ‘XYZ’. You will be talking to five kids who want to learn about metallurgy.”

Context statements like this help establish AI’s role. Once the AI knows what it’s geared towards, use it as a sounding board for new ideas.

Ask AI any question just as you would with a UX assistant. Use AI to:

  • Generate Ideas. “Give me five questions on XYZ within metallurgy”
  • Iterate Continuously. “I don’t like these questions. Give me five more”
  • Refine. “These questions appear too friendly. Word them differently.”
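This generate-iterate-refine loop maps naturally onto the chat-message format that most chat-style AI APIs accept. A minimal sketch, assuming an OpenAI-style message list (the metallurgy wording is from Chirag’s example above; the `ask` helper is hypothetical):

```python
# The role-play context goes in a "system" message; each follow-up request
# is a "user" turn appended to the same conversation history.
history = [
    {"role": "system", "content": (
        "You are a UX research assistant working for a metallurgical company. "
        "You will be talking to five kids who want to learn about metallurgy."
    )},
]

def ask(history, request):
    """Hypothetical helper: record a user turn. In real use you'd send
    `history` to the model here and append its reply as an "assistant" turn."""
    history.append({"role": "user", "content": request})
    return history

# Generate, iterate, refine -- the three moves from the list above.
ask(history, "Give me five questions on XYZ within metallurgy")
ask(history, "I don't like these questions. Give me five more")
ask(history, "These questions appear too friendly. Word them differently.")
```

Because the whole history travels with every request, the model “remembers” which questions it already suggested, which is what makes “give me five more” work.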

Researchers spend HOURS trying to frame a question correctly. Designers spend multiple hours on phrasing their call-to-action or titles and subtitles. 

This is where AI comes in. Refine your studies with its help. 

Once AI produces enough questions for your survey or interview, continue to refine them. “These are the questions that I’m planning to ask…”:

  • “Is there something missing or lacking in these questions?”
  • “Make these questions clearer / funnier / more serious.” (depending on the target audience)
  • “Check if any of these questions are ‘leading questions’.”

AI is a great thinking and UX sparring partner. Make it your research sidekick today.

Analyze Data with AI

It doesn’t stop there. Use AI during your data analysis — before you begin a study, or after data collection.

At the outset, Chirag warned: “The smaller the context, the smaller the data that you’re performing the AI task on, the better your analysis gets.”

Prayag has seen this with AI chat-based interfaces. 

“As the context gets bigger, the AI might forget things and make up a lot more stuff,” he said. 

Increasing the context (and data) means that the quality of results will likely deteriorate. It’s a delicate balancing act to establish the right context. 

Smaller, concentrated context can lead to mind-blowing results.

Lit Reviews with AI

Prayag harked back to his grad school days, when literature reviews were a sizable chunk of his research. He’d spend hours on end conducting lit reviews for any paper he was writing.

Once the scourge of researchers, Lit Reviews are now a breeze with AI.

Chirag will upload several 50+ page documents and ask, “What are the key takeaways? Explain them to me like I’m a 5-year-old.” 

The onus is on the researcher to identify which research papers to feed the AI. Chirag points out that Perplexity is an example of an AI engine that can suggest information sources. However, he still relies on his own instinct as a researcher while collecting papers, books and research material. 

He then lets AI summarize the results. 

Summarizing with AI

Generating a summary from a lengthy document (or information source) has become commonplace for AI tools.

Chirag walked us through the process: It’s not as straightforward as “here’s a transcript, give me a summary.”

“You can get a summary in one second, but it can also take days and days of (engineering) work to get a good summary,” he said. 

That’s where the UI comes in. The end goal, of course, is for a reader to read the summary. But as a designer, you want to create the best experience for the end user. Essentially, Chirag and his team had to design the perfect AI prompt to produce a better summary. The ingredients? They considered the participants, context, subject matter and key takeaways.

“You CANNOT leave AI guessing. What’s the ideal summary for you?” Chirag asked. 

Do More with AI

AI can perform trend and thematic analysis on documents too. When working with multiple transcripts or documents, Chirag recommends tackling this in a step-by-step manner (as if you’re talking to the smart baby, remember?).

Want to learn more about how AI can help with your analysis? Check out our series on UX Analytics with AI.

How to Improve Trust in AI Output

There’s a problem with AI. One we haven’t addressed. Yet. 

When users prompt AI for an output, they’re not privy to its inner workings. AI is constantly referred to as a ‘black box’. A world unknown. 

“You don’t know where the data is coming from. How is it calculating anything?” said Chirag. Throughout the chat, he spoke extensively about establishing a trust factor. 

How can we improve trust in AI’s output? How does Marvin do it?

“Trust goes hand-in-hand with transparency,” Chirag said. 

He cited an example that illustrates the difference in trust between using an AI engine versus conventional search (i.e., Google). 

When you Google something, you can trace a result to the exact source. If you have a health concern, you’d trust information on a government website. You wouldn’t trust information on a subreddit page (or you might, if the source is credible). This makes traditional search transparent.

With current iterations of AI, we get an answer, but we’re not sure how AI got there. There’s low inherent trust in AI. 

“It’s our job as product builders to build or elevate that trust,” Chirag said.

It’s forced him and his team to consider…

What does the new age of search look like? 

One that’s transparent and where you can trust the results. 

Marvin now offers path traceability through added citations for all AI answers. This gives users the capacity to vet their sources one by one and look at the underlying data. Then it’s on the user to decide whether they want to go with AI’s recommendation or not. 

Prayag pointed out that another important consideration is coverage:

  • How much data has the AI engine checked? 
  • Has the model looked at every single part of the dataset? 
  • Is it providing the most complete answer from 1000s of TBs of data?
  • Is AI’s answer the best representation of the data?

Sometimes, when you run the same command twice, AI spits out two different answers. Where’s the consistency?

More transparency, please. 

Popular AI Questions Answered

Prayag and Chirag answered the most frequently asked questions they hear about AI in design and research.

They shared their views on how AI is changing design and research:

Q1: Can AI help sustain a qualitative dashboard? One that analyzes ongoing research and updates accordingly?

Chirag: It is possible (and) something we’re looking into ourselves. If you’re making a dashboard, what does that entail? It’s not easy to create a dashboard of everything that’s going on in studies. What will go into this dashboard? What’s the end goal here?

Prayag: Beware of the pitfalls of using a large language model. Consider the trust factor. Can AI cite everything? Has it looked at everything or just the last interview?

Chirag: People might ask ‘has AI looked at all the data?’ or ‘how does it work?’. Build and include some sort of transparency factor in your dashboard.

Q2: Are there any ethical considerations or potential biases that UX designers should be mindful of when integrating AI into their research or designs?

Prayag: One of the things that AI has done that is good for the research industry is to (create) a conversation about bias. No study or research output is without bias, whether it’s qualitative or quantitative. At best what we can do is acknowledge our biases and try to minimize them. You cannot completely remove the bias. I am a big fan of acknowledging the bias, front and center. That bias might exist in your data itself. As you build a conclusion, look at counterfactuals. Make sure you look at the part of the story that doesn’t add up and (present) that. If you use AI, there’s some bias included in it. So acknowledge it, make it clear. Transparency breeds trust, like Chirag said.

Q2.5: Can you give examples of acknowledging and communicating one’s bias?

Prayag: Most people don’t do this enough. It’s a lot more common in academia – it’s expected. When we were writing a research paper, we’d have a ‘future work’ section in the end. We’d talk about where we could have gone wrong or the things that we should have done. Or here’s the bias we could have introduced to the dataset. Or what changes we would have made if we were to do this experiment or research again. A lot of times your reviewers call you out if you don’t acknowledge it. That’s usually the reason to do it. Academia acknowledges bias really well, but we should start doing it in the industry too. 

Q3: How might AI end the need for GUIs if there’s no longer an Internet destination and AI delivers a bespoke experience back to us? How does that radically change the role of a UX professional? 

Chirag: I think it is highly possible. I don’t think conversational UI is the end goal. If we bring conversational UI on top of that, and think it’d solve all our problems…that’s not going to happen. We’ll have to bring the best of both worlds – what GUI has provided and overlap it with AI. That’s going to be a magical interface for at least the foreseeable future. I think they’ve given us a lot and there’s still people who are still trying to make sense of the GUI in itself. There are still so many bad things about the GUI, but a lot of it works well also. Overlap with well done prompt engineering and use AI as a proper tool? A lot of things can be done using that.

Prayag: I agree. I think UI is something magical. It lets you do things without knowing exactly what you want to do. The drag and drop interface (for example). It lets you turn your thought into action without having to verbalize it. I think GUIs will definitely still exist. But smart GUI (with) smart interfaces driven by AI – that’s really exciting.

Q4: When we are designing, conducting and analyzing research using AI as a thought partner, how do we know if we are accurately capturing a culturally holistic output?

Chirag: If you’re doing research which is culturally diverse, be very cautious about it. While AI has been trained on a lot of questions, it’s also not trained that well on cultural values. Maybe things are going to be very different one or two years from now. But for the time being, be wary. 

Prayag: We did an amazing interview with Rida Qadri, a researcher at Google. Rida grew up in Pakistan. She did research on gig workers in the global south. Even though she’s from the global south, she had to unlearn a lot of vested values that she’d learnt in San Francisco. It’s not just an AI thing, it’s a human thing. As humans we need to be aware of when we bring our own biases, and cultural norms into our research. That’s hard. It comes with experience. I don’t think AI can replace that, honestly. It’d be a good exercise to give AI an interview guide for the US and say ‘I want to take that research to India. What are the changes that I need to make?’. I would like to see AI’s answer to that, but I think you’re better off thinking about it and asking a human.

User Research Software Marvin is a Game-Changer

Hero photo by davisuko on Unsplash
