Human-AI collaboration

15 February 2023
Mark Steyvers, professor of cognitive science at University of California, Irvine

At SPIE Medical Imaging 2023, Mark Steyvers, professor of cognitive science at the University of California, Irvine, will discuss some of the promises and pitfalls of AI-assisted decision-making, where a human decision-maker is aided by AI. His presentation will include empirical research that investigates the effectiveness of AI-assisted decisions, as well as the cognitive decision process under different paradigms for presenting AI-generated advice. Steyvers will also discuss the topics of "machine theory of mind" and "theory of machine": how humans and machines can efficiently form mental models of each other to collaborate more effectively.

What are some of the challenges of working with artificial intelligence? How are those challenges being met?
AI has the potential to improve human decision-making by providing decision recommendations and problem-relevant information to assist human decision-makers. However, realizing the full potential of human-AI collaboration still faces several challenges. First, we must understand the conditions that support complementarity, i.e., situations in which the performance of a human with AI assistance exceeds the performance of either the unassisted human or the AI in isolation. Second, we need to accurately assess human mental models of the AI, which include both expectations of the AI and strategies for how and when to rely on it. Third, we need to understand the effects of different design choices for human-AI interaction, including the timing of AI assistance and the amount of model information that should be presented to the human decision-maker to avoid cognitive overload and ineffective reliance strategies.

To meet these challenges, we need to approach human-AI collaboration from an interdisciplinary perspective, combining insights from imaging-domain experts, computer scientists, psychologists, human-computer interaction researchers, and related fields.

How would you describe "machine theory of mind" and "theory of machine"?
The term "theory of mind" comes from psychology and refers to the idea that to truly understand the actions of another human, it is important to explain their observed behavior in terms of (unobservable) preferences, beliefs, knowledge, and goals. So, "machine theory of mind" describes the concept of an AI that has the capability of understanding human behavior in terms of preferences, beliefs, knowledge, and goals. The concept "theory of machine" refers to the collection of beliefs that people have about AI: what the AI is capable of, when it is likely to make mistakes, and so on.

Taken together, these concepts describe how (ideally) AI has the ability to make inferences about human behavior in the same way that humans have the ability to make inferences about AI.

What do you see as the future of human-AI collaboration in medical imaging? What would you like to see?
Ideally, AIs should become more intelligent in their interaction with humans and gain the capability to reason about how to be most useful to the human. For example, AIs should present the most relevant information and avoid overwhelming the human with irrelevant detail. Interaction with AIs could also be improved through natural language dialog, allowing the human to query the AI in the same way a human expert might interact with another human expert.
