Category Archives: Speakers

Generalized Caching as a Service

Title: Generalized Caching as a Service

Speaker: Raju Rangaswami

Abstract: Data centers today host large numbers of workloads, and many of these workloads consume significant storage resources. Given the long history of successes in storage caching, it is only natural that such successes should bear fruit in modern data centers, at scale. In this talk, we motivate a new approach for building a generalized caching service for cloud data centers. Departing from existing application-, storage-, or data-type-specific caches, this service unifies and abstracts data center caching resources, making them available to any workload and for any data type. Also departing from past caching practices, this caching service is fault-tolerant, allowing it to cache writes without risk of data loss. Finally, as expected from production storage systems, this caching service also implements per-workload performance guarantees.
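
The abstract stays at a high level, so the sketch below is only an illustration of the kind of interface such a service might expose: a data-type-agnostic put/get API in which a write is acknowledged only once a quorum of cache nodes holds it, so cached dirty data survives the loss of a node. All names (CacheNode, CacheService, write_quorum) are hypothetical and not taken from the talk, and per-workload performance guarantees are omitted.

# Minimal, hypothetical sketch of a generalized cache-as-a-service interface:
# any workload can cache any key/value data, and a write is acknowledged only
# once enough cache nodes hold it, so a single node failure loses no data.

class CacheNode:
    """One cache server; here just an in-memory dict keyed per workload."""
    def __init__(self):
        self.data = {}

    def store(self, workload_id, key, value):
        self.data[(workload_id, key)] = value
        return True

    def lookup(self, workload_id, key):
        return self.data.get((workload_id, key))

class CacheService:
    def __init__(self, nodes, write_quorum=2):
        self.nodes = nodes
        self.write_quorum = write_quorum

    def put(self, workload_id, key, value):
        acks = sum(1 for n in self.nodes if n.store(workload_id, key, value))
        return acks >= self.write_quorum   # enough replicas hold the write

    def get(self, workload_id, key):
        for n in self.nodes:
            value = n.lookup(workload_id, key)
            if value is not None:
                return value
        return None                        # miss: caller reads backend storage

service = CacheService([CacheNode(), CacheNode(), CacheNode()])
service.put("workload-A", "block:42", b"dirty data")
print(service.get("workload-A", "block:42"))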

Short Bio: Dr. Raju Rangaswami is a Professor of Computer Science at Florida International University where he directs the Systems Research Laboratory. His work has focused on computer systems and software, including operating systems, distributed systems, storage systems, computer security, and real-time systems as well as application domains such as web services, databases, cloud computing, and mobile computing. He is a recipient of the NSF CAREER award and the Department of Energy CAREER award. His research is also supported by industry entities including IBM, Intel, NetApp, and Seagate.

Offloading Intra-Server Orchestration to Smart NICs

Title: Offloading Intra-Server Orchestration to Smart NICs

Speaker: Aditya Akella

Abstract: Orchestrating requests at a datacenter server entails load balancing and scheduling requests belonging to different services across CPUs, and adapting CPU allocation to request bursts. It plays a central role in meeting tight tail latency requirements and ensuring high throughput and optimal CPU utilization. Today’s server-based orchestration approaches are neither scalable nor flexible. In this talk, I will argue for offloading orchestration entirely to the server’s network interface card (NIC). I will present RingLeader, a new programmable “smart” NIC with novel hardware units for software-informed request load balancing and programmable scheduling, and a new light-weight OS-NIC interface that enables close NIC-CPU coordination and supports NIC-assisted CPU scheduling. I will conclude my talk with examples of other ways that smart NICs are changing the landscape of data center computing.
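
The abstract does not detail RingLeader's hardware, so the sketch below is only a software analogy of the orchestration being offloaded, not RingLeader's actual design: incoming requests are balanced onto per-core queues, and each core then pulls its highest-priority request first. The names (Orchestrator, dispatch, next_request) and the policies are illustrative assumptions.

# Illustrative software model of NIC-side orchestration (not RingLeader's
# actual hardware design): requests are load-balanced to the shortest
# per-core queue, and each core dequeues its highest-priority request first.

import heapq

class Orchestrator:
    def __init__(self, num_cores):
        # one priority queue per CPU core; entries are (priority, seq, request)
        self.queues = [[] for _ in range(num_cores)]
        self.seq = 0

    def dispatch(self, request, priority):
        # software-informed load balancing: choose the least-loaded core
        core = min(range(len(self.queues)), key=lambda c: len(self.queues[c]))
        heapq.heappush(self.queues[core], (priority, self.seq, request))
        self.seq += 1
        return core

    def next_request(self, core):
        # programmable scheduling: here, strict priority (lowest value first)
        return heapq.heappop(self.queues[core])[2] if self.queues[core] else None

orch = Orchestrator(num_cores=4)
orch.dispatch({"service": "kv-get"}, priority=0)     # latency-critical
orch.dispatch({"service": "analytics"}, priority=5)  # best-effort
print(orch.next_request(0), orch.next_request(1))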

Short Bio: Aditya Akella is a Regents Chair Professor of Computer Science at UT Austin. Aditya received his B. Tech. from IIT Madras (2000), and Ph.D. from CMU (2005). His research spans computer systems and networking, focusing on programmable networks, formal methods in systems, and systems for big data and machine learning. His work has influenced the infrastructure of some of the world’s largest online service providers. Aditya has received many awards for his contributions, including the ACM SIGCOMM Test of Time Award (2022), selection as a finalist for the US Blavatnik National Award for Young Scientists (2020 and 2021), UW-Madison “Professor of the Year” award (2019 and 2017), IRTF Applied Networking Research Prize (2015), SIGCOMM Rising Star award (2014), NSF CAREER award (2008), and several best paper awards.

Decarbonizing Cloud Computing Using CarbonFirst

Title: Decarbonizing Cloud Computing Using CarbonFirst

Speaker: Prashant Shenoy

Abstract: In this talk, I will discuss our CarbonFirst approach to designing sustainable cloud computing systems. Our goal is to make carbon efficiency a first-class design metric, similar to traditional metrics of performance and reliability. I will explain how today’s systems can be made first carbon-aware by exposing energy and carbon usage information to software platforms and then made carbon-efficient by providing control over the system’s carbon usage. I will present application case studies on how modern cloud applications can employ these mechanisms to reduce their carbon footprint. I will end with some open research questions in the field of sustainable computing.
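
The mechanisms themselves are not spelled out in the abstract; as a toy illustration of the general idea, the sketch below assumes a hypothetical platform signal that exposes the current grid carbon intensity and uses it to defer flexible batch work to lower-carbon periods. The function names, units, and threshold are assumptions, not part of the CarbonFirst API.

# Toy sketch of carbon-aware behavior (hypothetical interface, not the
# CarbonFirst API): the platform exposes the current grid carbon intensity,
# and the application defers flexible batch work while intensity is high.

def current_carbon_intensity():
    # Stand-in for a platform-provided signal, in gCO2/kWh.
    return 420.0

def run_or_defer(job, hours_to_deadline, threshold=300.0):
    intensity = current_carbon_intensity()
    if intensity <= threshold or hours_to_deadline <= 0:
        print(f"running {job} now at {intensity:.0f} gCO2/kWh")
        return "run"
    print(f"deferring {job}: {intensity:.0f} gCO2/kWh exceeds {threshold:.0f}")
    return "defer"

run_or_defer("nightly-model-training", hours_to_deadline=12)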

Short Bio: Prashant Shenoy is currently a Distinguished Professor in the College of Information and Computer Sciences at the University of Massachusetts Amherst. He received the B.Tech. degree in Computer Science and Engineering from the Indian Institute of Technology, Bombay and the M.S. and Ph.D. degrees in Computer Science from the University of Texas, Austin. His research interests lie in distributed systems and networking, with a recent emphasis on cloud and sustainable computing. He has been the recipient of several best paper awards at leading conferences, including a Sigmetrics Test of Time Award. He is a fellow of the ACM, the IEEE, and the AAAS.

Attention-Driven Software Architecture for Autonomous Robotic Agents

Title: Attention-Driven Software Architecture for Autonomous Robotic Agents

Speaker: Hyoseung Kim

Abstract: While there has been significant progress in developing autonomous systems, challenges still remain in effectively processing vast amounts of sensor data and making timely decisions, especially for small robotic agents with limited computing power. In this talk, I will introduce our recent project aimed at addressing this issue. The primary goal of this project is to create an attention-driven software architecture that can identify and prioritize critical information from sensors, enabling timely decision-making while considering resource constraints and uncertainties in the environment. This architecture is designed to holistically optimize computation scheduling, perception, and planning, with capabilities to adapt to changes in context and anticipate future actions. Our research tasks involve context-adaptive scheduling of autonomous computation pipelines, learning-based perception to anticipate future actions in dynamic environments, and motion planning and decision-making based on anticipated actions in the presence of uncertainty. I will share preliminary results motivating our research and discuss expected technical accomplishments and their potential to tackle fundamental challenges associated with time-sensitive scenarios in resource-constrained autonomous systems.
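
The architecture is not detailed in the abstract, so the following is only a rough illustration of attention-driven input selection under a compute budget: sensor readings are scored for criticality and only the most critical ones are processed each cycle. The scoring function, field names, and per-item cost are hypothetical.

# Hypothetical sketch of attention-driven input selection: sensor readings are
# scored for criticality and only the most critical ones are processed within
# the available compute budget; the rest are deferred to a later cycle.

def criticality(reading):
    # e.g., nearer and faster-approaching objects matter more
    return reading["closing_speed"] / max(reading["distance"], 0.1)

def select_inputs(readings, budget_ms, cost_ms=5.0):
    ranked = sorted(readings, key=criticality, reverse=True)
    selected, spent = [], 0.0
    for r in ranked:
        if spent + cost_ms > budget_ms:
            break                      # out of budget: defer remaining inputs
        selected.append(r)
        spent += cost_ms
    return selected

readings = [
    {"id": "ped-1", "distance": 3.0, "closing_speed": 1.5},
    {"id": "car-7", "distance": 40.0, "closing_speed": 2.0},
    {"id": "sign-2", "distance": 15.0, "closing_speed": 0.0},
]
print([r["id"] for r in select_inputs(readings, budget_ms=10.0)])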

Short Bio: Hyoseung Kim is an Associate Professor in the Department of Electrical and Computer Engineering at the University of California, Riverside (UCR). He is also serving as the Chair of the Computer Engineering Program at UCR. He received the PhD degree in Electrical and Computer Engineering from Carnegie Mellon University in 2016, and the MS and BS degrees in Computer Science from Yonsei University, Korea, in 2007 and 2005, respectively. His research interests are in real-time embedded and cyber-physical systems, autonomous systems, and smart sensing, with a focus on the intersection of systems software, hardware platforms, and analytical techniques. His research projects have been supported by NSF, ONR, DoD, DoJ, USDA/NIFA, etc. He is a recipient of the NSF CAREER Award and the Fulbright Scholarship Award. His research contributions have been recognized with Best Paper Awards at RTAS and RTCSA, and Best Paper Nominations at EMSOFT and ICCPS. For more information, please visit https://www.ece.ucr.edu/~hyoseung/.

Mosaic Pages: Increasing TLB Reach with Reduced Associativity Memory

Title: Mosaic Pages: Increasing TLB Reach with Reduced Associativity Memory

Speaker: Donald E. Porter

Abstract: The TLB is increasingly a bottleneck for big data applications. In most designs, the number of TLB entries is highly constrained by latency requirements and growing much more slowly than the working sets of applications. Many solutions to this problem, such as huge pages, perforated pages, or TLB coalescing, rely on physical contiguity for performance gains, yet the cost of defragmenting memory can easily nullify these gains. This talk introduces mosaic pages, which increase TLB reach by compressing multiple, discrete translations into one TLB entry. Mosaic leverages virtual contiguity for locality but does not use physical contiguity. Mosaic relies on recent advances in hashing theory to constrain memory mappings, in order to realize this physical address compression without reducing memory utilization or increasing swapping. Our results show that Mosaic’s constraints on memory mapping do not harm performance and reduce TLB misses in several workloads by 6-81%.
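
The paper's exact encoding is not given in the abstract, so the following is only a toy model of the underlying idea: if each virtual page may be placed in only one of K candidate frames chosen by hashing its virtual page number, a translation fits in log2(K) bits, and a single TLB entry can pack translations for several adjacent virtual pages. The constants K and GROUP and all names below are assumptions for illustration.

# Toy model (not the paper's exact design) of hash-constrained mappings:
# each virtual page may live in only one of K hash-chosen candidate frames,
# so its translation is just an index of log2(K) bits, and one TLB entry can
# pack the translations of GROUP adjacent virtual pages.

import hashlib

K = 4                # candidate frames per virtual page (assumed)
GROUP = 8            # adjacent virtual pages packed into one TLB entry (assumed)
NUM_FRAMES = 1 << 20

def candidate_frames(vpn):
    # K hash functions pick the K allowed physical frames for this virtual page
    return [int(hashlib.blake2b(f"{vpn}-{i}".encode(), digest_size=4).hexdigest(), 16)
            % NUM_FRAMES for i in range(K)]

def encode_entry(page_table, base_vpn):
    """Pack GROUP translations into one entry, log2(K) bits per page."""
    entry = 0
    for offset in range(GROUP):
        vpn = base_vpn + offset
        idx = candidate_frames(vpn).index(page_table[vpn])  # which candidate holds it
        entry |= idx << (2 * offset)                        # 2 bits per page since K == 4
    return entry

def translate(entry, base_vpn, vpn):
    idx = (entry >> (2 * (vpn - base_vpn))) & (K - 1)
    return candidate_frames(vpn)[idx]

# Demo: place each virtual page in its first candidate frame.
page_table = {vpn: candidate_frames(vpn)[0] for vpn in range(GROUP)}
entry = encode_entry(page_table, base_vpn=0)
print(translate(entry, base_vpn=0, vpn=3) == page_table[3])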

Short Bio: Don Porter is a Professor of Computer Science at the University of North Carolina at Chapel Hill. Porter’s research interests broadly involve developing more efficient and secure computer systems. Porter earned a Ph.D. and M.S. from The University of Texas at Austin, and a B.A. from Hendrix College. He has received awards including the NSF CAREER Award, the Bert Kay Outstanding Dissertation Award from UT Austin, an ASPLOS Distinguished Paper Award in 2023, an ASPLOS Influential Paper Award in 2022, and Best Paper Awards at FAST 2016, EuroSys 2016, and RTNS 2018.

Architecting Computer System Abstraction with Secure Environment in Mind

Title: Architecting Computer System Abstraction with Secure Environment in Mind

Speaker: Yan Solihin

Abstract: In this talk, I will point out that current Trusted Execution Environment (TEE) abstractions of secure enclaves are incompatible with traditional system abstractions of compute (processes and threads) and data (shared memory, files, etc.), making it hard to adopt TEEs universally. I will discuss the research needed to bring TEEs into compatibility with traditional system abstractions, and the challenges in achieving it.

Short Bio: Yan Solihin is the Director of the Cybersecurity and Privacy Cluster and Charles N. Millican Professor of Computer Science at the University of Central Florida. He obtained his Ph.D. in computer science from the University of Illinois at Urbana-Champaign (UIUC) in 2002. His research interests include computer architecture and systems, and secure processors. He is a recipient of the 2023 HPCA Test of Time Award, the 2010 and 2005 IBM Faculty Partnership Awards, the 2004 NSF Faculty Early Career Award, and the 1997 AT&T Leadership Award. He was one of the pioneers in cache sharing fairness and Quality of Service (QoS), efficient counter-mode memory encryption, and the Bonsai Merkle Tree, which have significantly influenced Intel Cache Allocation Technology and Software Guard Extensions (SGX). He was named an IEEE Fellow “for contributions to shared cache hierarchies and secure processors” in 2017. He is listed in the HPCA Hall of Fame, ISCA Hall of Fame, and Computer Architecture Total (CAT) Hall of Fame.

Bringing Foundational Models to Consumer Devices via ML Compilation

Title: Bringing Foundational Models to Consumer Devices via ML Compilation

Speaker: Tianqi Chen

Abstract: Deploying deep learning models on various devices has become an important topic. Machine learning compilation is an emerging field that leverages compiler and automatic search techniques to accelerate AI models. ML compilation brings a unique set of challenges: emerging machine learning models, increasing hardware specialization that brings a diverse set of acceleration primitives, and a growing tension between flexibility and performance. In this talk, I will discuss our experience in bringing foundational models to a variety of devices and hardware environments through machine learning compilation.

Short Bio: Tianqi Chen is currently an Assistant Professor at the Machine Learning Department and Computer Science Department of Carnegie Mellon University. He is also the Chief Technologist of OctoML. He received his Ph.D. from the Paul G. Allen School of Computer Science & Engineering at the University of Washington. He has created many major learning systems that are widely adopted: XGBoost, TVM, and MLC-LLM.

Systems Research in Quantum Computing

Title: Systems Research in Quantum Computing

Speaker: Frank Mueller

Abstract: Quantum computing has become reality with the deployment of different device technologies accessible through the cloud. However, current hardware technologies pose a number of problems, which require research advances in the systems area of quantum computing. This talk provides an overview of contemporary problems, sample solutions, and open research challenges in this field. It also highlights the benefits of interdisciplinary collaborations and discusses funding opportunities.

Short Bio: Frank Mueller (mueller@cs.ncsu.edu) is a Professor in Computer Science and a member of multiple research centers at North Carolina State University. Previously, he held positions at Lawrence Livermore National Laboratory and Humboldt University Berlin, Germany. He received his Ph.D. from Florida State University in 1994. He has published papers in the areas of quantum computing, parallel and distributed systems, embedded and real-time systems, and compilers. He is a member of ACM SIGPLAN and ACM SIGBED, and an ACM Fellow as well as an IEEE Fellow. He is a recipient of an NSF CAREER Award, an IBM Faculty Award, a Google Research Award, and two Fellowships from the Humboldt Foundation.

Digital Transformation of Academic Scientific Environments via IoT Systems

Title: Digital Transformation of Academic Scientific Environments via IoT Systems

Speaker: Klara Nahrstedt, University of Illinois Urbana-Champaign

Abstract: Academic cleanrooms are special scientific environments on campuses where faculty, staff, postdocs, and students from the physical and life sciences meet and make their discoveries in materials, semiconductors, chip design, and other scientific domains. Academic cleanrooms, like many other environments (e.g., cities, manufacturing, homes), are going through major digital transformations. In this talk we will discuss the utility of Internet of Things (IoT) systems in academic cleanrooms, as these systems are becoming an integral part of that digital transformation. We will briefly present the difficulties of academic cleanrooms that IoT systems researchers must understand when researching, augmenting, designing, developing, and then deploying IoT systems in cleanrooms. We will then present advances in IoT systems for academic cleanrooms that scientists can benefit from, increasing the speed of scientific innovation and the efficiency of processes in academic cleanrooms.

Short Bio: Klara Nahrstedt is the Grainger Distinguished Chair in Engineering Professor in the Computer Science Department, and the Director of the Coordinated Science Laboratory in the Grainger College of Engineering at the University of Illinois at Urbana-Champaign. Her research interests are directed toward end-to-end Quality of Service (QoS) and resource management in large-scale multi-modal distributed systems, networks, and cyber-physical systems. She is the recipient of the IEEE Communication Society Leonard Abraham Award for Research Achievements, University Scholar, Humboldt Research Award, IEEE Computer Society Technical Achievement Award, ACM SIGMM Technical Achievement Award, TU Darmstadt Piloty Prize, and the Grainger College of Engineering Drucker Award. Klara Nahrstedt received her Diploma in Mathematics from Humboldt University, Berlin, Germany in 1985. In 1995, she received her PhD from the University of Pennsylvania in the Department of Computer and Information Science. She is an ACM Fellow, an IEEE Fellow, an AAAS Fellow, a Member of the German National Academy of Sciences (Leopoldina Society), and a Member of the US National Academy of Engineering.

Chameleon: A Large-Scale, Deeply Reconfigurable Testbed for Computer Science Systems Research

Title: Chameleon: A Large-Scale, Deeply Reconfigurable Testbed for Computer Science Systems Research

Speaker: Kate Keahey

Abstract: We live in interesting times: new ideas and technological opportunities emerge at an ever-increasing rate in disaggregated hardware, programmable networks, and the edge computing and IoT space, to name just a few. These innovations require an instrument where they can be deployed and investigated, and where the new solutions that those disruptive ideas require can be developed, tested, and shared. To support a breadth of Computer Science experiments, such an instrument has to provide access to a diversity of hardware configurations, support deployment at scale, and offer deep reconfigurability so that a wide range of experiments can be supported. It also has to provide mechanisms for easy and direct sharing of repeatable digital artifacts so that new experiments and results can be easily replicated and help enable further innovation. Most importantly, since science does not stand still, such an instrument requires the capability for constant adaptation to support an ever-increasing range of experiments driven by emergent ideas and opportunities.

The NSF-funded Chameleon testbed (www.chameleoncloud.org) provides those capabilities. Specifically, they include access to a variety of hardware including cutting-edge architectures, a range of accelerators, storage hierarchies with a mix of large RAM, NVDIMMs, a variety of enterprise- and consumer-grade SSDs, HDDs, high-bandwidth I/O storage, LiQid composable hardware, SDN-enabled networking hardware, and fast interconnects. This diversity was recently enlarged to add support for edge computing/IoT devices and will be further extended this year to include GigaIO composable hardware as well as P4 switches. Chameleon is distributed over two core sites, at the University of Chicago and the Texas Advanced Computing Center (TACC), connected by a 100 Gbps network, as well as four volunteer sites at IIT, NCAR, Northwestern University, and the University of Illinois Chicago (UIC). Bare-metal reconfigurability for Computer Science experiments is provided by the CHameleon Infrastructure (CHI), based on an enhanced bare-metal flavor of OpenStack: it allows users to reconfigure resources at the bare-metal level, boot from a custom kernel, and have root privileges on the machines. To date, the testbed has supported 8,000+ users and 1,000+ unique projects in research, education, and emergent applications.

In this talk, I will describe the goals and capabilities of the testbed, as well as some of the research and education projects our users are working on. I will also discuss our new thrusts in support of research on edge computing and IoT, our investment in the development and packaging of research infrastructure (CHI-in-a-Box), as well as our support for composable systems that can both dynamically integrate resources from other sources into Chameleon and make Chameleon resources available via other systems. Lastly, I will also outline the services and tools we have created to support the sharing of experiments, educational curricula, and other digitally expressed artifacts that allow science to be shared via active involvement and foster reproducibility.

Short Bio: Kate Keahey is one of the pioneers of infrastructure cloud computing. She created the Nimbus project, recognized as the first open source Infrastructure-as-a-Service implementation, and continues to work on research aligning cloud computing concepts with the needs of scientific datacenters and applications. To facilitate such research for the community at large, Kate leads the Chameleon project, providing a deeply reconfigurable, large-scale, and open experimental platform for Computer Science research. To foster the recognition of contributions to science made by software projects, Kate co-founded and serves as co-Editor-in-Chief of the SoftwareX journal, a new format designed to publish software contributions. Kate is a Scientist at Argonne National Laboratory and a Senior Fellow at the Computation Institute at the University of Chicago.