Contrary to popular belief, artificial intelligence is not just machines processing vast quantities of quantitative data. There is a distinctly qualitative, human element in the process.
“It’s about mixing a moment of human judgment and everything else that can be predictably put in play for a system. I almost can’t understand how somebody thinks AI happens without user research.”
Those words of wisdom come from Microsoft’s Senior Principal Researcher and MacArthur Genius Fellow Mary L. Gray. Mary is a leading voice in the evolving field of AI and ethics, where computing and the social sciences converge. She sat down with us to talk about her anthropological background and how it shapes her focus on how technology impacts people’s lives.
Read on for a sneak peek of the interview.
Why Qualitative Research Is the Linchpin of Better AI
Mary pointed to a seismic shift in the way we use technology: once peripheral to our lives, it is now wholly ingrained in them. Technology has transformed how we work, play and sleep. Yet AI systems have already shown harmful effects on individuals, groups and society at large. Her view, therefore, is a little different:
“I focus on how technology fails people.”
Focus on the Human Side of User Research
“If we think about social media or any technical system, we have what we might call infrastructure; these systems are so much a part of facilitating communication and information exchange. A great deal of responsibility should be attached to building them.
“My interest is thinking about the responsibilities that come with the agreements we’re making across all these systems, both for the companies and the people that design and sell them, and for each of us. What are our responsibilities to each other? We haven’t really asked that question.
“What follows if we refocus our attention and see people and social groups not just as objects of analysis, but as the constituency to which those building systems are obligated?”
The Researcher’s Responsibility for Ethical AI and User Research
Mary was appointed to the Committee on Responsible Computing Research at the National Academy of Sciences.
The committee’s report highlights the need for the research community to consider the impact of the systems they build, and it establishes guidelines and best practices to address the complex ethical and societal challenges that arise from computing research.
The committee was an intellectual powerhouse led by Barbara Grosz, whom Mary credits with spearheading the effort and assembling thought leaders from various disciplines:
“The report came from Barbara Grosz’s vision of her discipline’s current dilemma about, ‘what can be done differently?’ She was one of the first to track (that) it requires us to think at the very beginning (about) what could happen, what follows.
“She made a point of populating this study, requested by Congress and commissioned by the National Science Foundation. Very smartly, she was looking for representatives from different areas, not just computer science and engineering (CS&E), who could speak to domains such as health, labor and economics, where the ubiquity of computing systems and the growth of AI have been moving forward without the input of domain experts.
“Hats off to her. She was the one who saw (that) what we need most is to understand, ‘What happens when we wait until something’s built to evaluate it?’ That is really too late.
“Intellectually, that committee was unbelievable. It was fantastic to be in a Zoom room with so many brilliant people, coming at this question with a lot of humility and willingness to consider rethinking what can or should be done.”
The Implications for Researchers… and Collaborative User Research
“Looking at the AI Accountability Act, it was trying to figure out, ‘if we’re accountable, what do we need to change about these disciplines, these scientific approaches to innovation?’ In most cases, anybody with a CS&E background isn’t trained to think about the context that’s going to tee up what could follow or who’s going to benefit. Staying within CS&E, there is a collaborative approach to seeing who you need to bring in to refine what you have in mind. That team sport approach to building systems lends itself to incorporating UX research.
“There’s a need for the UX researcher who is thinking about UI. The person trained to deliver the best design approach to engagement is part of a team. What is that dream team that you want together from the very beginning, from ideation? We all have somebody in mind who might benefit from what we build.
“User researchers are expected to do a lot of work that needs to be spread across a range of subject matter experts, including the person who’s going to be experiencing the pain and the benefit of that system. I don’t just mean the person, I mean categories and groups of people we have to reorient to. We are not solving problems for individuals. We’re looking at social relationships and making the unit of analysis for the build a relationship among different institutions and people. That’s new.
“With tools like Marvin, it’s like having a qualitative perception of a range of takes. Some of it may not be necessary, (but) you can’t know what you need in qualitative research until you’ve collected it. Qualitative work is about the collection. You go back, and what seemed negligible in the moment turns out to be incredibly important. And that team approach to doing that, that’s new too.”
AI Companies Must Use Qualitative Data
“We can no longer afford to probe (or) A/B test our way through systems at the scale at which they operate. They’re completely unregulated. The reality is, for anybody who wants to build a system and put it out there right now, there’s very little that they’re called upon to account for. How much did you think about the differential impact of what you built?
“To reorient to that requires conceding that at the moment of prototyping, there is already (an) embedded set of assumptions that have to be qualitatively interrogated. You can qualitatively analyze, rigorously understand a question that you’re posing that you think technology is going to answer, and start with that. What is the presumed question? To do that, you’ve got to get a crew of subject matter experts together who can help identify those assumptions.
“The exciting thing to me is, we haven’t even started trying this yet. I’m curious what happens when we let go. What is ostensibly the power of not having to ask somebody’s opinion, not validating it as expertise that we don’t have? What could we build together? It’s exciting.”
The Essential Role of Qualitative Researchers within AI
“Over the last decade, it hasn’t been explained to the general public that there’s something quite powerful about just having a lot of data. And it is powerful, (but) it’s still a pile of data. It doesn’t mean anything, it doesn’t tell us anything, and it can’t interpret the world. That’s a human capacity.
“It’s a pretty amazing technological feat to take those prior (data) and, without the structure of what you could be predicting, come up with a prediction that would be useful.
“My frustration is, it means nothing if you don’t have people trained to understand how that decision came to be made. Most importantly, how is that a decision that cannot be replicated with a different group of people?
“Apply that assumption to hate speech: there is no easy solution to identifying hate speech or disinformation through AI. We desperately need more user research to understand under what conditions somebody looks at something and thinks, ‘that’s right.’ We’re never going to get to the place where misinformation, disinformation, hate speech are stable or static enough to stop needing user research.”
This interview was transcribed with (you guessed it) Marvin. Only minimal edits were needed. The whole conversation with Mary was a delight — you’ll definitely want to watch the full recording.
Photo by ThisisEngineering RAEng on Unsplash