Speaker: Biresh Kumar Joardar
Date/Time: Thursday, September 23, 2021
Advanced computing systems have long been enablers of breakthroughs in Machine Learning (ML). However, as ML algorithms become more complex and datasets grow larger, existing computing platforms are no longer sufficient to bridge the gap between algorithmic innovation and hardware design. For example, DNN training can be accelerated on GPUs; however, GPUs are bandwidth-bottlenecked, which can lead to sub-optimal performance. New designs such as processing-in-memory, where computing cores are placed close to the memory, can address these limitations. However, designing these new architectures often involves optimizing multiple conflicting objectives (e.g., performance, power, thermal, reliability). Design problems are further exacerbated by the availability of different core architectures (CPUs, GPUs, NVMs, FPGAs, etc.) and interconnection technologies (e.g., TSV-based stacking, M3D, photonics, wireless), each with a set of unique design requirements that must be satisfied. The resulting diversity in the choice of hardware has made the design, evaluation, and testing of new architectures an increasingly challenging problem.
In this presentation, we will discuss how machine learning techniques can be used to solve complex hardware design problems (and vice versa). More specifically, we will highlight the symbiotic relationship between hardware design and machine learning. We will demonstrate how machine learning techniques can be used to advance hardware designs spanning edge devices to the cloud, which will in turn empower further advances in machine learning (i.e., machine learning for machine learning).
Biresh Kumar Joardar is currently an NSF-sponsored Computing Innovation (postdoctoral) Fellow in the Department of Electrical and Computer Engineering at Duke University. He obtained his PhD from Washington State University in 2020. His PhD research focused on using machine learning algorithms to design and optimize heterogeneous manycore systems. As a CI Fellow, Biresh is currently working on developing reliable and energy-efficient architectures for machine learning applications. He received the Outstanding Graduate Student Researcher Award at Washington State University in 2019. Biresh has published in numerous prestigious conferences (including ESWEEK, DATE, and ICCAD) and journals (TC, TCAD, and TECS). His work has been nominated for Best Paper Awards at DATE 2019 and DATE 2020, and he won the Best Paper Award at NOCS 2019. His current research interests include machine learning, manycore architectures, accelerators for deep learning, and hardware reliability and security.
Host: Prof. Mohammad Al Faruque