PUBLIC TALK (Lytle Lecture)
Monday, Nov. 14, 3:15 – 4:30 p.m. (Pacific Time) | UW HUB Lyceum
Title: Robotics algorithms that take people into account
I discovered AI by reading “Artificial Intelligence: A Modern Approach”. What drew me in was the concept that you could specify a goal or objective for a robot, and it would be able to figure out on its own how to sequence actions in order to achieve it. In other words, we don’t have to hand-engineer the robot’s behavior — it emerges from optimal decision making. Throughout my career in robotics and AI, it has always felt satisfying when the robot would autonomously generate a strategy that I felt was the right way to solve the task, and it was even better when the optimal solution would take me a bit by surprise. In “Intro to AI” I share with students an example of this, where a mobile robot figures out it can avoid getting stuck in a pit by moving along the edge. In my group’s research, we tackle the problem of enabling robots to coordinate with and assist people: for example, autonomous cars driving among pedestrians and human-driven vehicles, or robot arms helping people with motor impairments (together with UCSF Neurology). And time and time again, what has sparked the most joy for me is when robots figure out their own strategies that lead to good interaction — when we don’t have to hand-engineer that an autonomous car should inch forward at a 4-way stop to assert its turn, for instance, but instead, the behavior emerges from optimal decision making. In this talk, I want to share how we’ve set up optimal decision making problems that require the robot to account for the people it is interacting with, and the surprising strategies that have emerged from that along the way. And I am very proud to say that you can also read a bit about these aspects now in the 4th edition of “Artificial Intelligence: A Modern Approach”, where I had the opportunity to edit the robotics chapter to include optimization and interaction.
TECHNICAL TALK (Colloquium Lecture)
Tuesday, Nov. 15, 10:30 – 11:30 a.m. (Pacific Time) | UW ECE 125
Title: On human models for human-robot interaction
Much of my work has dealt with human-robot interaction by pretending that people are like robots: assuming they optimize for utility, and run Bayes filters to maintain estimates over what they can't directly observe. It's somewhat surprising that this approach works at all, given that behavioral economics has long warned us that people are a bag of heuristics and cognitive biases — a far cry from "rational" robot behavior. On the other hand, treating people as black boxes and throwing a lot of data at the problem leads to models that are themselves a bag of spurious correlations: they produce amazingly accurate predictions in distribution, but fail spectacularly outside of it. This has left me with the question: how do we get accurate, yet robust, human models? One idea I want to share in this talk is that many of the aspects of human behavior that seem arbitrary, inconsistent, and time-varying might actually be explained by acknowledging that people make decisions using inaccurate estimates that evolve over time. This is far from a perfect model, but it greatly expands the space of useful models for robots and AI agents more broadly.
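To make the "people are like robots" modeling assumption concrete, here is a minimal illustrative sketch (not code from the talk): the human is modeled as noisily rational, choosing actions with probability proportional to exp(beta × utility), and the robot runs a Bayes filter over the human's unobserved goal. The goals, actions, and utility function below are toy assumptions chosen for illustration.

```python
# Sketch of a "noisily rational" human model with a Bayes filter over the
# human's goal. All names and the utility function are illustrative, not
# from the talk.
import math

GOALS = ["left", "right"]            # hypothetical unobserved human goals
ACTIONS = ["step_left", "step_right"]

def utility(goal, action):
    # Toy utility: stepping toward the goal is +1, away is -1.
    return 1.0 if action.endswith(goal) else -1.0

def boltzmann_policy(goal, beta=2.0):
    # P(action | goal) proportional to exp(beta * utility):
    # the "human as approximate utility maximizer" assumption.
    weights = [math.exp(beta * utility(goal, a)) for a in ACTIONS]
    z = sum(weights)
    return {a: w / z for a, w in zip(ACTIONS, weights)}

def bayes_update(belief, observed_action, beta=2.0):
    # The robot's Bayes filter: update the belief over goals given the
    # human action it just observed.
    posterior = {g: belief[g] * boltzmann_policy(g, beta)[observed_action]
                 for g in GOALS}
    z = sum(posterior.values())
    return {g: p / z for g, p in posterior.items()}

# Start from a uniform prior; observe the human step left twice.
belief = {g: 1.0 / len(GOALS) for g in GOALS}
for obs in ["step_left", "step_left"]:
    belief = bayes_update(belief, obs)
print(belief)  # belief mass concentrates on the "left" goal
```

The temperature `beta` controls how rational the modeled human is: high values recover a near-optimal agent, low values a near-random one — one simple knob for the gap between "rational robot" and actual human behavior that the abstract discusses.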
Anca Dragan is an associate professor in the EECS Department at UC Berkeley. Her goal is to enable robots (and AI agents more broadly) to work for and around people. She runs the InterACT laboratory, where she focuses on algorithmic human-robot interaction: algorithms that move beyond the robot's function in isolation and generate robot behavior that coordinates well with human actions and is aligned with what humans actually want the robot to do. Anca received her Ph.D. from Carnegie Mellon University's Robotics Institute. She helped found the Berkeley AI Research Laboratory, and is co-principal investigator of the Center for Human-Compatible AI. She has been honored with the Presidential Early Career Award for Scientists and Engineers (PECASE), the NSF CAREER Award, the Sloan and Okawa research awards, the ONR Young Investigator Award, the MIT TR35 list, and the IEEE RAS Early Academic Career Award.