Human-AI Collaboration for Decision-Making
Introduction
Welcome
Welcome everybody to this session on human-AI collaboration for decision-making. My name is Besmira Nushi, and I'm a researcher in the Adaptive Systems and Interaction group here at Microsoft Research AI. Today I will be co-chairing this session together with Ece Kamar, who is also a researcher in the same group.
Faculty Summit Focus
The focus of this Faculty Summit has been the future of work, and AI technologies are beginning to play significant roles in our everyday work. We want human-AI collaboration to become seamless and fluent. But as many people in the room know, collaboration isn't easy: it is not simply a matter of piecing parts together and hoping they work. Many aspects of collaboration need to be carefully planned and optimized, whether in the human-AI interaction itself or in the foundations of the algorithms.
Amazing Line of Speakers
Today, we have an amazing lineup of speakers who have worked on many aspects of collaboration: collaboration in the physical world, human-AI decision-making, and algorithmic transparency and interpretability.
First Speaker: Ayanna Howard
The first speaker I would like to introduce is Ayanna Howard. Ayanna is the Linda J. and Mark C. Smith Professor and Chair of the School of Interactive Computing at the Georgia Institute of Technology, where she leads the Human-Automation Systems (HumAnS) Lab. Her research focuses on humanized intelligence, which she defines as the process of embedding human cognitive capabilities into the control path of autonomous systems. Please welcome Ayanna.
(Applause)
Presentation by Ayanna Howard
Ayanna begins by explaining that she is an experimentalist: she studies robotic and intelligent autonomous systems that interact with real people in real-world environments, not just with students in lab settings. Her work uses robots and AI technologies to engage children with special needs in therapy and educational activities. Like human collaborators, she argues, AI systems should plan and optimize their collaborations to achieve the best results.
Children with disabilities are a large demographic, and their care can benefit from robotic assistance, especially in motor and behavioral therapy. Ayanna argues that these systems should be designed around human-human interaction norms to ensure that they positively influence therapeutic outcomes.
Ayanna presents experiments showing that children interacting with these robots tend to over-trust the AI systems. This over-trust is a concern because people come to rely heavily on such systems without questioning them, even when the systems are wrong.
Lastly, she highlights the biases that can seep into AI systems and cause them to perform differently across demographic groups. She proposes mitigations such as filters or "lenses" that adapt a system so it works accurately across different age groups and demographics.
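To make the bias point concrete, a standard first step is a disaggregated evaluation: measuring accuracy separately for each demographic group instead of in aggregate. Below is a minimal sketch of that idea; the group labels, data, and `predict` function are hypothetical placeholders, not taken from Ayanna's systems.

```python
from collections import defaultdict

def accuracy_by_group(examples, predict):
    """Per-group accuracy over (features, label, group) records.

    `predict` is any classifier function; it and the group labels are
    hypothetical stand-ins used only for illustration.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for features, label, group in examples:
        total[group] += 1
        if predict(features) == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy data and a degenerate "model" that always predicts 1.
examples = [
    ({"x": 0.9}, 1, "children"), ({"x": 0.2}, 0, "children"),
    ({"x": 0.8}, 1, "adults"),   ({"x": 0.7}, 1, "adults"),
]
print(accuracy_by_group(examples, predict=lambda f: 1))
# -> {'children': 0.5, 'adults': 1.0}; the 75% aggregate hides the gap.
```

A report like this is what a "lens" would act on: once the per-group gap is visible, the system can be adapted or recalibrated for the groups it underserves.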
Second Speaker: Jon Kleinberg
The next speaker is Jon Kleinberg, a professor of computer science at Cornell University. Jon discusses the broader notion that AI should not just replace human effort but help allocate it more effectively. Using medical diagnosis as a canonical example, he introduces a model that triages tasks between humans and AI systems. Jon argues that neither full automation nor zero automation is optimal; the best policies lie in between.
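A minimal sketch of the kind of triage policy Jon describes: send a case to the model when the model is confident, and to a human expert otherwise. The `confidence` callable and the threshold value are illustrative assumptions, not Jon's actual formulation.

```python
def triage(cases, model, confidence, threshold=0.9):
    """Split cases between automation and human review.

    `model` and `confidence` are hypothetical callables; the 0.9 threshold
    is an illustrative choice, not a value from Kleinberg's talk.
    """
    automated, deferred = [], []
    for case in cases:
        if confidence(case) >= threshold:
            automated.append((case, model(case)))  # AI handles confident cases
        else:
            deferred.append(case)  # human effort goes where the model is unsure
    return automated, deferred
```

In this framing, full automation and no automation are just the two extreme threshold settings; the interesting question is where between them the threshold should sit for a given task.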
To support his point, Jon turns to chess, which serves as a nearly ideal model system because it is so highly instrumented: every move is recorded, and strong engines can evaluate every position. By analyzing millions of chess games across skill levels, Jon shows how we can better understand when humans err and which parts of the task are appropriate to automate.
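As an illustration of that style of analysis, the sketch below estimates how often players at each rating level blunder, counting a move as a blunder when the engine evaluation drops by more than a fixed margin. The record format, the 200-point rating buckets, and the two-pawn margin are all assumptions made for this sketch, not the pipeline from Jon's study.

```python
from collections import defaultdict

BLUNDER_MARGIN = 2.0  # assumed evaluation drop, in pawns, counted as a blunder

def blunder_rate_by_rating(moves):
    """moves: iterable of (player_rating, eval_before, eval_after) records,
    with engine evaluations taken from the mover's point of view.
    This record format is an assumption of the sketch."""
    blunders, total = defaultdict(int), defaultdict(int)
    for rating, before, after in moves:
        bucket = (rating // 200) * 200  # e.g. 1400, 1600, 1800, ...
        total[bucket] += 1
        if before - after > BLUNDER_MARGIN:
            blunders[bucket] += 1
    return {b: blunders[b] / total[b] for b in sorted(total)}
```

Statistics like these show where human error concentrates, which is exactly the information a triage policy needs when deciding what to automate.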
Third Speaker: Rich Caruana
The final speaker is Rich Caruana, who discusses the need for interpretable and transparent AI models. He introduces Generalized Additive Models with pairwise interactions (GA^2M), a model class that aims to provide high accuracy while remaining interpretable.
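In its standard published form, a GA^2M predicts through a sum of low-dimensional terms, written here with generic symbols:

```latex
g\big(\mathbb{E}[y]\big) \;=\; \beta_0 \;+\; \sum_i f_i(x_i) \;+\; \sum_{(i,j) \in \mathcal{S}} f_{ij}(x_i, x_j)
```

Here g is a link function, each f_i is a learned one-dimensional shape function for feature x_i, and S is a small set of selected feature pairs. Because every term depends on at most two features, every term can be plotted and inspected directly, which is the source of the model's interpretability.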
Rich presents an example of applying GA^2M to a pneumonia dataset. By visualizing how each feature affects the predicted risk, he shows how the model exposes the correlations and relationships in the data. Rich emphasizes that while the most accurate models often appear as black boxes, interpretable models like GA^2M can provide both accuracy and clarity.
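Microsoft's open-source InterpretML library implements this model family as Explainable Boosting Machines. Below is a minimal sketch, using synthetic data as a stand-in for the pneumonia dataset, of fitting such a model and viewing its per-feature shape functions; it is an illustration, not Rich's actual experiment.

```python
# pip install interpret
import numpy as np
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier

# Synthetic stand-in for clinical data: three made-up features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] * X[:, 2]
     + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# An Explainable Boosting Machine is a GA^2M trained with boosting;
# `interactions=3` also learns up to three pairwise terms.
ebm = ExplainableBoostingClassifier(
    feature_names=["age", "heart_rate", "temperature"], interactions=3
)
ebm.fit(X, y)

# Every term is a 1-D (or 2-D) function that can be plotted and inspected.
show(ebm.explain_global())
```

Plots of the individual terms are what make it possible to judge, in context, whether a learned pattern is clinically sensible or an artifact of how the data were collected, which is the kind of inspection Rich demonstrates on the pneumonia data.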
Some interesting points in his presentation include the importance of good tools in making sense of complex data and the intricate relationship between model accuracy and interpretability. He also stresses the importance of context when judging the correctness of a model.
Conclusion
The session concludes with the understanding that human-AI collaboration requires careful optimization of both human effort and AI capabilities. By focusing on transparent and interpretable AI systems, we can ensure that the collaboration results in better decision-making and outcomes.
Keywords
- Human-AI Collaboration
- Decision-Making
- Interpretability
- Transparency
- AI Systems
- Optimization
FAQ
Q: What is humanized intelligence? A: Humanized intelligence is the process of embedding human cognitive capabilities into the control path of autonomous systems.
Q: Why is over-trust in AI systems a concern? A: Over-trust in AI systems is a concern because humans may rely too heavily on AI without questioning its decisions, even when the AI is wrong.
Q: What is the role of interpretable models in human-AI decision-making? A: Interpretable models help ensure that human-AI collaboration remains transparent and that the AI’s decisions can be understood and trusted by humans.
Q: What are Generalized Additive Models with pairwise interactions (GA^2M)? A: GA^2M is a class of machine learning models that combines high accuracy with interpretability by expressing its prediction as a sum of per-feature functions plus a small number of pairwise interaction terms, each of which can be visualized.
Q: Can AI systems improve the allocation of human effort? A: Yes. AI systems can help allocate human effort more effectively by triaging tasks and optimizing the workflow, as discussed in Jon Kleinberg's medical diagnosis example.
Q: How do biases affect AI systems? A: Biases can lead AI systems to perform differently across various demographics, which is why it is crucial to address these biases to ensure accurate and fair outcomes.