Abstract:
Trustworthy AI requires secure hardware that protects data confidentiality and integrity. Yet security is often an afterthought, while AI hardware is optimized primarily for performance and energy efficiency. My vision is to establish AI hardware as the root of trust, enforcing confidentiality and integrity through hardware mechanisms without sacrificing these critical design goals.
This talk presents three contributions toward this vision. First, I show that security solutions are not simple add-ons; they fundamentally reshape AI hardware behavior. I introduce SecureLoop, an architectural framework for exploring AI accelerators with cryptographic memory protection. SecureLoop optimizes memory access patterns under the constraints imposed by cryptography, achieving up to a 50% improvement in energy-delay product. Second, I translate these insights into silicon. I present Sorbet, a fabricated AI accelerator chip that provides comprehensive memory protection with modest energy and area overheads. Third, I enable scalable efficiency modeling for off-the-shelf GPUs, laying the groundwork for system-level security-efficiency evaluation. I propose EnergAIzer, a fast GPU energy model that achieves the accuracy of detailed simulations in a fraction of the time. Looking ahead, my research will address the proactivity, scalability, and sustainability challenges of secure AI hardware.
Bio:
Kyungmi Lee is a Postdoctoral Associate in the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, where she received her Ph.D. in 2024, working with Anantha P. Chandrakasan. Her research focuses on designing secure and efficient hardware support for trustworthy AI. Her broader research interests span computer architecture, VLSI/digital circuits, and AI systems. She received the MIT MTL Doctoral Dissertation Award (2024) and was named a Siebel Scholar (2020).
Kyungmi Lee (she/her)
Massachusetts Institute of Technology
ECE 037
12 Feb 2026, 10:30am–11:30am
Sajjad Moazeni

