COMPUTER SCIENCE RESEARCH WEEK JANUARY 2025
Professor Kevin Jamieson, Paul G. Allen School of Computer Science & Engineering at the University of Washington
Professor Ana Klimovic, Department of Computer Science at ETH Zurich
Professor Barna Saha, Department of Computer Science & Engineering at the University of California San Diego
COM3 Level 1
Multipurpose Hall 1, 2 and 3 [COM3 01-26, 01-27 and 01-28]
These distinguished talks are part of the NUS Computer Science Research Week 2025: https://researchweek.comp.nus.edu.sg/
10:00 – 11:20 Computer Security in Known-Adversary Threat Models - Tom Ristenpart
Abstract: I’ll describe our work exploring computer security in contexts where the adversary is known to the victim, and a member of their social circles. Such known-adversary threats come up in a variety of interpersonal abuse contexts, such as intimate partner violence, human trafficking, and more. I’ll provide an overview of our work with IPV survivors to understand the computer security issues they face, our work developing interventions to directly assist people facing known-adversary threats, and how this has inspired new lines of research on authentication, cryptography, and more.
Bio: Thomas Ristenpart is a Professor at Cornell Tech and a member of the Computer Science department at Cornell University. Before joining Cornell Tech in May 2015, he spent four and a half years as an Assistant Professor at the University of Wisconsin-Madison. He completed his PhD at UC San Diego in 2010. His research spans a wide range of computer security topics, with recent focuses including digital privacy and safety in intimate partner violence, anti-abuse mitigations for encrypted messaging systems, improvements to authentication mechanisms including passwords, and topics in applied and theoretical cryptography. His work is routinely featured in the media and has been recognized by numerous distinguished paper awards, two ACM CCS test-of-time awards, a USENIX Security test-of-time award, an Advocate of New York City award, an NSF CAREER Award, and a Sloan Research Fellowship.
13:00 – 14:20 On the Instance-dependent Sample Complexity of Reinforcement Learning - Kevin Jamieson
Abstract: An autonomous agent is placed in an unfamiliar environment with unknown rules and hidden rewards. How quickly can the agent learn to navigate and maximize its accumulated rewards? This question lies at the heart of reinforcement learning (RL), a sequential decision making problem formulation with applications ranging from video games to personalized healthcare.
In this talk, I will start with the most fundamental reinforcement learning scenario: environments with a finite set of states and actions and a finite time horizon, which illustrates core challenges and foundational methods in the field. The first half of the lecture will introduce multi-armed bandits and classical strategies for RL such as the Upper Confidence Bound (UCB) algorithm, which are provably optimal for worst-case environments. In the second half, I will present my lab’s recent work on instance-dependent approaches: algorithms that not only handle worst-case scenarios but also adaptively capitalize on easier environments, yielding better performance when possible. These results offer new insights into what makes certain reinforcement learning problems easy or difficult, ultimately providing a complete characterization of these challenges.
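For readers less familiar with bandit algorithms, here is a minimal, illustrative sketch of the classic UCB1 rule mentioned above; the Bernoulli arms and the `ucb1` helper are hypothetical examples for exposition, not code from the speaker's work.

```python
import math
import random

def ucb1(arms, horizon):
    """Classic UCB1: pull each arm once, then repeatedly pull the arm whose
    empirical mean reward plus an exploration bonus is largest. The bonus
    shrinks as an arm is sampled more often, balancing exploration and
    exploitation."""
    counts = [0] * len(arms)    # number of pulls per arm
    means = [0.0] * len(arms)   # empirical mean reward per arm
    for t in range(1, horizon + 1):
        if t <= len(arms):
            arm = t - 1         # initialization: try every arm once
        else:
            arm = max(
                range(len(arms)),
                key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]),
            )
        reward = arms[arm]()    # observe a stochastic reward
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # running average
    return means, counts

# Example: three Bernoulli arms with success probabilities 0.3, 0.5, 0.7.
bandit = [lambda p=p: float(random.random() < p) for p in (0.3, 0.5, 0.7)]
means, counts = ucb1(bandit, horizon=10_000)
print(counts)   # the 0.7 arm should receive the bulk of the pulls
```

As the abstract notes, this kind of strategy is provably optimal for worst-case environments; the instance-dependent algorithms discussed in the second half aim to also exploit easier instances.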
Bio: Kevin Jamieson is an Associate Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. He received his B.S. in 2009 from the University of Washington, his M.S. in 2010 from Columbia University, and his Ph.D. in 2015 from the University of Wisconsin-Madison under the advisement of Robert Nowak, all in electrical engineering. He returned to the University of Washington as faculty in 2017 after a postdoc at the University of California, Berkeley, working with Benjamin Recht. Jamieson's work has been recognized by an NSF CAREER award and an Amazon Faculty Research award.
15:00 – 16:20 Rethinking System Software for Efficient, Elastic Cloud Computing - Ana Klimovic
Abstract: Cloud computing platforms have evolved from renting virtual machines to providing elastic compute and storage services that abstract hardware resources from users. Shifting the responsibility of resource allocation and task scheduling from cloud users to providers makes the cloud easier to use and gives providers the opportunity to optimize under the hood for performance and energy efficiency.
Yet, elastic compute services like Functions as a Service (FaaS) are highly inefficient for providers to run today and offer poor performance guarantees for users. This is largely because today’s FaaS platforms are still based on system software that was designed for the older, more traditional cloud execution model of users renting long-running virtual machines. FaaS platforms run user code as a black box inside MicroVMs, limiting the provider's ability to optimize scheduling and data fetching. Furthermore, despite being more lightweight than traditional VMs, MicroVMs still incur significant startup times. To minimize the impact on request latency, providers keep many MicroVMs pre-initialized in memory. This is expensive and leads to variable performance for users as some requests still incur cold starts.
Rather than retrofitting traditional VMs and cloud orchestration systems, we propose Dandelion, a clean-slate platform for elastic cloud computing. To build Dandelion, we co-design a new cloud-native programming model and execution system that: 1) exposes application dataflow to the cloud platform, such that the platform can perform application-aware scheduling and data prefetching, 2) reduces the attack surface of untrusted application code, such that the platform can securely isolate tasks with lightweight sandboxes that boot faster and can be bin-packed more densely than MicroVMs, and 3) abstracts hardware, enabling the platform to seamlessly offload different types of tasks to heterogeneous hardware (including GPUs and SmartNICs) to further optimize performance per cost. Dandelion aims to enable fast, efficient, elastic computing with secure isolation guarantees for cloud applications such as distributed log processing, elastic data processing with user-defined functions, and agentic AI systems.
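To make the first design point concrete, below is a deliberately hypothetical sketch (emphatically not Dandelion's actual API) of what it means for an application to expose its dataflow to the platform: tasks and their data dependencies are declared explicitly, so the platform, rather than opaque user code, can decide scheduling and prefetching. All names here (`Task`, `Dataflow`, the log-processing pipeline) are invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only: an application declares its tasks and data
# dependencies explicitly, so the platform can see the graph, prefetch inputs,
# and place tasks, instead of running one opaque function as a black box.

@dataclass
class Task:
    name: str
    fn: callable                                  # pure computation, no hidden I/O
    inputs: list = field(default_factory=list)    # names of upstream tasks

@dataclass
class Dataflow:
    tasks: dict = field(default_factory=dict)

    def add(self, name, fn, inputs=()):
        self.tasks[name] = Task(name, fn, list(inputs))

    def run(self, source):
        """Execute tasks in dependency order. A real platform would instead use
        this graph for application-aware scheduling, data prefetching, and
        bin-packing of sandboxes across machines."""
        results, pending = {}, dict(self.tasks)
        while pending:
            for name, task in list(pending.items()):
                if all(dep in results for dep in task.inputs):
                    args = [results[d] for d in task.inputs] or [source]
                    results[name] = task.fn(*args)
                    del pending[name]
        return results

# Example: a tiny log-processing pipeline whose structure is visible to the platform.
flow = Dataflow()
flow.add("parse", lambda raw: [line.split(",") for line in raw])
flow.add("filter", lambda rows: [r for r in rows if r[1] == "ERROR"], inputs=["parse"])
flow.add("count", lambda rows: len(rows), inputs=["filter"])
print(flow.run(["1,ERROR", "2,OK", "3,ERROR"])["count"])  # -> 2
```

In the black-box FaaS model the abstract criticizes, the equivalent pipeline would be hidden inside a single function in a MicroVM, leaving the provider no structure to exploit.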
Bio: Ana Klimovic is an Assistant Professor in the Systems Group of the Computer Science Department at ETH Zurich. Her research interests span operating systems, computer architecture, and their intersection with machine learning. Ana's work focuses on computer system design for large-scale applications such as cloud computing services, data analytics, and machine learning. Before joining ETH in August 2020, Ana was a Research Scientist at Google Brain and completed her Ph.D. in Electrical Engineering at Stanford University.
16:30 – 17:50 Role of Structured Matrices in Fine-Grained Algorithm Design - Barna Saha
Abstract: Fine-grained complexity attempts to determine the precise time complexity of a problem and has emerged as a guide for algorithm design in recent years. Some of the central problems in fine-grained complexity deal with the computation of distances: for example, computing all-pairs shortest paths in a weighted graph, computing the edit distance between two sequences or two trees, and computing the distance of a sequence from a context-free language. Many of these problems reduce to the computation of matrix products over various algebraic structures, predominantly over the (min,+) semiring. Obtaining a truly subcubic algorithm for the (min,+) product is one of the outstanding open questions in computer science.
Interestingly, many of the aforementioned distance computation problems have additional structural properties. Specifically, when we perturb the inputs slightly, we do not expect a large change in the output. This simple yet powerful observation has led to better algorithms for several problems whose running times had not improved in decades, including Language Edit Distance, RNA folding, and Dyck Edit Distance. This structure in the problem leads to matrices that have the Lipschitz property, and we gave the first truly subcubic time algorithm for computing the (min,+) product over such Lipschitz matrices. Follow-up work by several researchers obtained improved bounds for monotone matrices and for (min,+) convolution under similar structural assumptions, leading to improved bounds for a series of optimization problems. These results yield not just faster exact algorithms but also better approximation algorithms. In particular, we show how fast (min,+) product computation over monotone matrices leads to better additive approximation algorithms for all-pairs shortest paths on unweighted undirected graphs, an improvement after twenty-four years.
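For concreteness, here is the standard cubic-time definition of the (min,+) matrix product that the abstract refers to, written as the straightforward triple loop; the structured subcubic algorithms for Lipschitz and monotone matrices are the subject of the talk and are not reproduced in this sketch.

```python
import math

def min_plus_product(A, B):
    """Naive (min,+) matrix product: C[i][j] = min over k of A[i][k] + B[k][j].
    This is the cubic-time definition; the talk concerns beating this bound
    when the matrices have structure (e.g. Lipschitz or monotone rows)."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[math.inf] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            for j in range(p):
                if A[i][k] + B[k][j] < C[i][j]:
                    C[i][j] = A[i][k] + B[k][j]
    return C

# Distance interpretation: if A holds edge weights of a graph (inf = no edge),
# repeated (min,+) squaring of A yields all-pairs shortest-path distances.
A = [[0, 3, math.inf],
     [math.inf, 0, 1],
     [2, math.inf, 0]]
print(min_plus_product(A, A))  # shortest-path distances using at most two hops
```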
Bio: Barna Saha is the Harry E. Gruber Endowed Chair Professor at the University of California San Diego, holding faculty appointments in the Department of Computer Science & Engineering and the Halıcıoğlu Data Science Institute. She is the Director of the National NSF TRIPODS Institute for Emerging CORE Methods in Data Science (EnCORE) at the University of California San Diego. Previously, Saha was a tenured Associate Professor at the University of California, Berkeley; before that she was on the Computer Science faculty at UMass Amherst and a Senior Research Scientist at the AT&T Shannon Research Laboratory. Saha's research interests are broadly in theoretical computer science. Specifically, she designs fast algorithms for classical problems as well as for problems in the emerging fields of AI and data science. Her many accolades include a Presidential Early Career Award for Scientists and Engineers (PECASE), the highest honor given by the United States government to early-career researchers, a Sloan Research Fellowship, an NSF CAREER award, and multiple industry faculty fellowship awards.