Adaptive experimental design (AED), or active learning, leverages already-collected data to guide future measurements, in a closed loop, toward collecting the most informative data for the learning problem at hand. In both theory and practice, AED can extract considerably richer insights than any measurement plan fixed in advance, using the same statistical budget. Unfortunately, the same feedback mechanism that can aid an algorithm in collecting data can also mislead it: a data collection heuristic can become overconfident in an incorrect belief, then collect data based on that belief, yet give little indication to the practitioner that anything went wrong. Consequently, it is critical that AED algorithms be provably robust, with transparent guarantees. In this talk I will present my recent work on near-optimal approaches to adaptive testing with false discovery control and to the best-arm identification problem for linear bandits, and show how these approaches relate to, and leverage, ideas from non-adaptive optimal linear experimental design.
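To make the closed-loop idea concrete, the following is a minimal illustrative sketch (not the talk's method) of best-arm identification via successive elimination: at each round the algorithm samples only the arms it has not yet ruled out, so past data directly shapes which measurements are taken next. The arm means, confidence-radius constants, and Bernoulli reward model here are illustrative assumptions.

```python
import math
import random

def successive_elimination(means, delta=0.05, max_rounds=2000, seed=0):
    """Identify the best arm by repeatedly sampling all still-plausible arms
    and eliminating any arm whose confidence interval falls below the
    empirical leader's. `means` are the hidden true Bernoulli means,
    known only to the simulator, not to the algorithm."""
    rng = random.Random(seed)
    n = len(means)
    active = list(range(n))
    sums = [0.0] * n
    counts = [0] * n
    for t in range(1, max_rounds + 1):
        # Closed loop: measure only arms that are still candidate winners.
        for a in active:
            sums[a] += 1.0 if rng.random() < means[a] else 0.0
            counts[a] += 1
        # Hoeffding-style confidence radius (constants are illustrative).
        rad = math.sqrt(math.log(4 * n * t * t / delta) / (2 * t))
        leader = max(active, key=lambda a: sums[a] / counts[a])
        leader_lower = sums[leader] / counts[leader] - rad
        # Keep an arm only if its upper bound still reaches the leader's
        # lower bound; otherwise it provably (w.h.p.) is not the best.
        active = [a for a in active
                  if sums[a] / counts[a] + rad >= leader_lower]
        if len(active) == 1:
            return active[0]
    return max(active, key=lambda a: sums[a] / counts[a])
```

Because sampling effort concentrates on the arms that remain plausible, the total number of measurements adapts to the gaps between the arm means, which is the source of AED's advantage over a fixed, non-adaptive allocation.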
Kevin Jamieson is an Assistant Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington and is the Guestrin Endowed Professor in Artificial Intelligence and Machine Learning. He received his B.S. in 2009 from the University of Washington, his M.S. in 2010 from Columbia University, and his Ph.D. in 2015 from the University of Wisconsin–Madison under the supervision of Robert Nowak, all in electrical engineering. He returned to the University of Washington as faculty in 2017 after a postdoc with Benjamin Recht at the University of California, Berkeley. Jamieson’s research explores how to leverage already-collected data to inform which measurements to make next, in a closed loop. His work ranges from theory, to practical algorithms with guarantees, to open-source machine learning systems, and has been adopted in a range of applications, including measuring human perception in psychology studies, adaptive A/B/n testing in dynamic web environments, numerical optimization, and efficient tuning of hyperparameters for deep neural networks.