
Why UX Designers are the Driving Force Behind Responsible AI

Learn why AI experts at Microsoft think UX is at the heart of a more robust and responsible AI environment.


Groundbreaking AI technology has unbelievable power and potential — yet numerous companies have come under fire for inherent bias and misrepresentation in their AI systems. Rightfully so: companies releasing new tech must be cognizant of the pitfalls and of their obligation to introduce responsible AI to society.

With great power comes great responsibility. 

Yes, we just quoted Spider-Man (again). Microsoft’s Mihaela Vorvoreanu, PhD (aka Mickey) employs this ethos constantly in her work.

Mickey is Director of UX Research & Responsible AI Education at Aether, an initiative for AI Ethics and Effects in Engineering and Research. She and her colleagues were among the first to produce thought leadership and guidance on the practice of responsible AI.

Her view?

Responsible AI is a people problem. It can’t be solved with technology alone.

Enter UX.

In a fascinating and insightful conversation, Mickey sat down with Marvin to talk about the road to responsible AI. Read on to learn about:

  • What is responsible AI?
  • The road to responsible AI
  • How UX and responsible AI are intertwined
  • Self governance for responsible AI

What is Responsible AI?

Mickey defines responsible AI rather simply: “It’s about repeatedly asking the questions — what could go wrong? Who can be harmed? How?”

Mary Gray has shed light on how AI was failing certain members of society, and Mickey reminded us that it’s not just direct users — harm resulting from AI can extend to bystanders, other stakeholders and society at large.

It’s because of this potential for wide-reaching harm that society has begun to demand change and accountability in the development of AI technologies. That pressure has forced the industry to reflect on how we build AI.

So how do we build AI in a responsible, human-centered way? Microsoft outlined three best practices for responsible AI:

  1. Users – consider all users and the variety of contexts in which they use AI. Tie all technical decisions back to user needs. 
  2. Diversity – involve diverse perspectives early and throughout development. Talk to diverse sets of potential users, but also involve various team members throughout the development process to decide on a system’s functionality.
  3. Failure – no system is right 100% of the time — failures are inevitable. Plan for failures so users can recover when things break down or go wrong (a minimal sketch follows below).

Circling back to these three principles helps drive human-centric decisions when building AI-based technology. Rida Qadri examines how mobility apps became more user-centric in the Global South.
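To make the third principle concrete, here is a minimal sketch in Python. Everything in it is illustrative: the classify_ticket function, the 0.6 threshold and the interface copy are hypothetical stand-ins of ours, not Microsoft guidance. The pattern is what matters: every AI call hands the interface either a usable result or an explicit recovery path.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIResult:
    label: Optional[str]   # the model's suggestion, if any
    needs_human: bool      # True means the UI should offer a manual path
    user_message: str      # plain-language copy the interface can show

def classify_ticket(text: str, model) -> AIResult:
    """Wrap a model call so failure always leaves the user a way forward."""
    try:
        label, confidence = model.predict(text)  # hypothetical model API
    except Exception:
        # Outright failure: say so plainly and let the user take over.
        return AIResult(None, True,
                        "We couldn't sort this automatically. Pick a category below.")
    if confidence < 0.6:  # the threshold is a product decision, not a constant
        return AIResult(label, True,
                        f"This might be '{label}', but we're not sure. Please confirm.")
    return AIResult(label, False,
                    f"Filed under '{label}'. You can change this at any time.")
```

The specifics will vary by product; the point is that failure is a designed-for state rather than an afterthought.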

The Road to Responsible AI

A burgeoning and evolving field, AI has erupted with an abundance of ideas, research and tools. Mickey describes the current state of affairs with regards to using tools in responsible AI:

“So far, we’re throwing the toolbox on the table and we’re surprised that people are eating blueberries with a screwdriver. It’s really hard to know what to do when, and which tool to pick when,” she said.

Clearly, there’s a lack of guidance on how to properly use the tools at your disposal. That’s where Mickey and her colleagues came in. Before the explosion of AI, they identified the need for organized guidelines for companies beginning their journey towards responsible AI. She walked us through two frameworks her team has created at Microsoft to inform the appropriate and responsible development and use of AI:

Human AI Experiences Toolkit

We’re a long way from building technologies that operate without human intervention. Building responsible AI systems will always require human judgment and decision making.

Aether developed the Human AI Experiences (HAX) Toolkit for people designing and developing experiences with AI; the concept is akin to HCI (human-computer interaction). HAX consists of a set of tools that help professionals implement human-centered practices for creating responsible AI systems.

Mickey and her colleagues took a systematic and measured research approach, co-developing the framework with more than 40 practitioners across the company over four years.

A major theme throughout the HAX documentation is that one discipline cannot go the distance alone. Designing user experiences requires interdisciplinary collaboration. 

Mickey has no doubt about who will be crucial on the journey: “We absolutely need UX if we want to build effective user experiences with AI.” She illustrates with an example:

Effectively communicating the uses and limitations of a system is not a task data scientists and engineers relish – they aren’t exactly trained in the art. The solution? Involve UX professionals in communication.

“People are going to have experiences with AI. If we don’t engage the people who specialize in optimizing people’s experiences with AI, (do) we want to leave that up to chance?” she asked.

That said, UX can’t communicate effectively on its own.

“You need information about the system – error rates, confidence rates, accuracy, the types of mistakes it’s going to make. You need to translate that from data science into an interface that people can relate to without overwhelming them with numbers and statistics,” she said.

Assembling a diverse set of skills and collaborating with experts are crucial to developing human-centric AI technologies.
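As a rough illustration of that translation step, here is a short Python sketch. The thresholds and wording are assumptions of ours; in practice, the copy itself should be tested with users.

```python
def confidence_to_copy(confidence: float) -> str:
    """Turn a raw model confidence (0 to 1) into language users can relate to."""
    if confidence >= 0.9:
        return "Suggested match"
    if confidence >= 0.7:
        return "Likely match (worth a quick check)"
    return "Low-confidence guess (please review)"

def disclose_limits(accuracy: float, known_failure_modes: list[str]) -> str:
    """Summarize a system's limits without a wall of statistics."""
    return (f"This feature is right about {round(accuracy * 100)}% of the time. "
            f"It struggles most with: {', '.join(known_failure_modes)}.")

# e.g. disclose_limits(0.87, ["heavy accents", "overlapping speakers"])
```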

Learn more about Microsoft’s HAX Toolkit.

Responsible AI Maturity Model

The pièce de résistance is a project that took Mickey and her colleagues over two years, 90 participants and hundreds of hours decoding complex interview data (using Marvin, we’re proud to say!). 

The Responsible AI Maturity Model (RAI MM) is a framework to help companies understand and advance their journey towards responsible AI. It leans on Microsoft’s responsible AI pillars of Fairness, Reliability and Safety, Privacy and Security, Inclusiveness, Transparency and Accountability. 

Over the course of the study, Mickey was surprised by some of the responses they got in discussions with interviewees:

Take Fairness. When asked what constitutes fairness and how it’s practiced in a mature way, some data scientists responded that it doesn’t begin with fairness at all. They wanted to talk about culture, collaboration and leadership buy-in.

“The beauty of qualitative research is that you often get answers to questions you didn’t even ask,” Mickey said.

With this rich input, Mickey and her team ended up with three different levels of maturity – a pyramid that helps firms chart their path to responsible AI:

Figure: the responsible AI maturity pyramid. Source: Microsoft

Each of these pillars is broken down into tangible, measurable items, and adopting each one makes a company’s responsible AI practice a little more concrete. Mickey describes the maturity model as both descriptive and prescriptive. Identifying your company’s existing level of maturity is the descriptive part: you match your practice to one of the stages outlined in the model. Leveling up is more prescriptive – it requires you to look ahead at what the next level of maturity looks like.

Getting Leadership Onside

The HAX toolkit was developed with designers and people building AI technologies in mind. The audience for the responsible AI maturity model, on the other hand, consists of management and leadership.

Mickey was adamant that this framework speak management’s language. Why was that important?

“You cannot have a sustained, integrated and mature responsible AI practice if it’s grassroots only. You have to have leadership buy-in — not only declaring that responsible AI is important, but actually incentivizing it with resources,” said Mickey.

Companies must have validation from the top down when implementing responsible AI practices. So how do you convince management to actively champion responsible AI?

Mickey believes this is easier today than ever before. She points to the extensive coverage of how errant AI has failed members of society, and of the reputational harm companies have suffered as a result.

“Do you want to be an article in The New York Times? People watching the news see that this is actually important, that it might cause reputational damage and showcases real harms that could happen to people in society,” she said. 

UX and Responsible AI

Why Responsible AI Needs UX Research

Mickey describes the normal chain of events when companies begin introducing novel technologies:

“Data scientists get all excited, start building something and then realize, ‘oh, I don’t know what shade of blue to pick. Let’s call in a UX person.’ That tells you several things are wrong. First of all, you didn’t engage them early. Second, you don’t really understand what UX can do,” she said. 

This brings up something we touched on earlier — cross-disciplinary collaboration. Assembling a diverse set of experts and perspectives from early on contributes significantly to a responsible AI system. 

According to Mickey, responsible AI is a socio-technical problem. It cannot be solved by data scientists or UX alone. Let’s borrow the example of defining fairness from above:

“You need user research to understand (in) your product’s context, what does fairness mean? Who are the parties who are negotiating fairness, and what does fairness mean to them? And then work with your data scientists to shift that into a number,” said Mickey. 
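As one hedged example of shifting a definition into a number: if user research concludes that, for a given product, fairness means different groups receiving favorable outcomes at similar rates, a data scientist might operationalize that as a demographic parity gap. This is just one of many formal fairness definitions, and the Python sketch below is illustrative, not Microsoft’s method.

```python
def demographic_parity_gap(outcomes: list[int], groups: list[str]) -> float:
    """Gap in favorable-outcome rates across groups (0.0 means parity).

    outcomes: 1 = favorable decision, 0 = unfavorable, one entry per person
    groups:   the same person's group label, in the same order
    """
    rates = {}
    for g in set(groups):
        member_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(member_outcomes) / len(member_outcomes)
    return max(rates.values()) - min(rates.values())

# e.g. demographic_parity_gap([1, 0, 1, 1], ["a", "a", "b", "b"]) -> 0.5
```

Which definition to encode, and for whom, is exactly the question user research has to answer first; different fairness metrics can pull in opposite directions.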

UX is necessary, then, but not always sufficient. To tackle the “socio” side of the equation, companies may need help from other experts, such as anthropologists, sociologists or ethicists, to name a few. A constant exchange of ideas and information across disciplines encourages holistic development of responsible AI systems.

Mickey feels strongly about learning what others bring to the table. “You have to be intentional about building a common language, so you can work across disciplines. This takes purposeful, intentional effort, learning about that other discipline so you can work together,” she said.

When we spoke with Mary Gray (also of Microsoft), she emphasized the importance of qualitative research in AI.

Embracing AI in Your UX Toolkit

As AI pervades every field, it has become impossible for UX professionals to ignore. The question is: is it worth staunchly resisting? Or should you embrace this new technology and understand both how it can help you do your job and where its limitations lie?

We think it’s the latter.

Mickey is quick to point out that AI has replaced people who earned their living transcribing interviews (Marvin’s Live Transcription has been a game-changer for Microsoft researchers). Transcription conducted by AI is much more efficient, freeing up researchers’ time and allowing for speedier analysis.

How Do UX Professionals Become AI-Ready?

UX researchers need not be experts in training models, but they must have a fundamental understanding of how models work. It’s important to recognize that these systems are probabilistic in nature, that they take inputs and produce outputs, and that they will inevitably be wrong and make mistakes. Understanding those inevitable mistakes ultimately makes users’ lives easier: professionals can plan the user experience around the snags and help users recover from errors or problems.
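One concrete way to design around that probabilistic nature: instead of silently committing to the model’s single best guess, surface the top few candidates so users can correct course. The sketch below assumes a scikit-learn-style predict_proba interface, and it assumes the labels list matches the model’s class order; the function name and example labels are ours.

```python
def top_k_suggestions(model, text: str, labels: list[str], k: int = 3):
    """Return the k most probable labels so the UI can offer alternatives."""
    probs = model.predict_proba([text])[0]  # one probability per label
    ranked = sorted(zip(labels, probs), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]  # e.g. [("billing", 0.62), ("refunds", 0.21), ...]
```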

It’s clear that some technologies have undoubtedly made researchers’ lives easier. Of others, Mickey is less convinced.

Microsoft piloted a cognitive AI service that was meant to carry out affect recognition of individuals, inferring their emotional state or stress levels.

“We’ve taken it off the market because we realized that it stood against our responsible AI principles. Research shows that there’s a lack of validity in recognizing actual emotions,” Mickey said.

She reminded us that qualitative analysis is fundamentally conducted through a researcher’s lens, shaped by the research question and the research goal. It will always need the human layer.

“I do not believe that AI is anywhere near replacing UX research. For goodness sake, you need a PhD in that!” she exclaimed.

Self-Governance of Responsible AI

As governments and regulatory agencies struggle to keep up with the pace of rapid AI innovation, the onus falls on companies themselves to self-regulate. 

Mickey touches on how this iterative process began at Microsoft:

A host of researchers were engaged to craft the firm’s internal policy based on the pillars above – one that its AI products must comply with. They used learnings from the first version to inform the creation of a second.

Included in their documentation are transparency notes for AI services and models. These detail the capabilities and limitations of the system in question, the context in which the model has been tested, the users it’s been tested for and, importantly, the users it hasn’t been tested for. Several iterations later, Microsoft’s responsible AI policies are now publicly available.
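Based only on the elements Mickey lists, a transparency note might capture something like the structure below. This is a hedged sketch; the field names are ours, not Microsoft’s schema.

```python
from dataclasses import dataclass

@dataclass
class TransparencyNote:
    """Illustrative structure for documenting an AI system's boundaries."""
    system_name: str
    capabilities: list[str]           # what the system can do
    limitations: list[str]            # known failure modes and caveats
    tested_contexts: list[str]        # contexts the model was evaluated in
    tested_user_groups: list[str]     # users it has been tested for
    untested_user_groups: list[str]   # and, importantly, those it hasn't
```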

To test AI technologies, Mickey suggests employing a tactic similar to “red teaming”. Red teaming involves allowing ethical hackers to attack a system and spot its vulnerabilities; the thinking is that it acts as a risk assessment of one’s security. Mickey suggests assembling a diverse team of people who could “poke at a model or product and identify what could go wrong.”

With AI legislation on the horizon, Mickey thinks that companies have to make a choice:

“It is possible to stay ahead of the curve. With something as big as responsible AI, there’s a difference between being ethical and abiding by the law. As a company, you need to figure out where you want to be. Often, practicing responsible AI is going to mean doing more than legislation requires in a lot of countries,” she said.

UX Researchers Are Critical to Responsible AI

UX practitioners have never had it easy. Always clamoring for a seat at the table and routinely axed in times of financial difficulty, they constantly face an uphill battle to prove the legitimacy of their craft. Amidst all the uncertainty surrounding their future, Mickey ended with some encouraging final words for UX pros:

“UX plays an important role in AI and responsible AI. We fought this fight before as a field, and it seems we might need to fight it again. As long as we do our due diligence in learning, so that our own understanding and expectations are managed and realistic with regards to AI. I hope that one takeaway is to have that confidence that, yes, there is a role for UX and it’s a critical one,” she said. 

Want to read additional resources about the road to more responsible AI?

Photo by Miguel Bruna on Unsplash
