Seminars at CECS

“Programmability, Scalability, and Security for Reconfigurable Computing in the Cloud”

Speaker: Prof. Deming Chen, University of Illinois at Urbana-Champaign (UIUC)

Date and Time: Thursday, November 3rd, 3:00 pm

Location: EH 2430 or Zoom Link


Reconfigurable computing uses FPGAs (Field-Programmable Gate Arrays) as an alternative to microprocessors to enable high-performance, low-energy customized computing. It is becoming a mainstream technology, as evidenced by Intel’s $16.7B acquisition of Altera in 2015 and AMD’s $49B acquisition of Xilinx in 2022. However, challenges remain in FPGA programmability, scalability, and security before reconfigurable computing can make a transformative impact on the computing world, especially in the cloud. In this talk, Dr. Chen will present new concepts and research results that show initial promise in overcoming these challenges, including a shared virtual memory system for computing with FPGAs, scalable high-level synthesis for FPGA programming, and trusted execution environments with accelerators. These results were developed within the AMD-Xilinx Center of Excellence and the Hybrid-Cloud Thrust of the IBM-Illinois Discovery Accelerator Institute at UIUC.


Deming Chen is the Abel Bliss Professor of the Grainger College of Engineering at the University of Illinois at Urbana-Champaign (UIUC). His current research interests include reconfigurable computing, hybrid cloud, system-level design methodologies, machine learning and acceleration, and hardware security. He has published more than 250 research papers, received ten Best Paper Awards and one ACM/SIGDA TCFPGA Hall-of-Fame Paper Award, and given more than 140 invited talks. He is an IEEE Fellow, an ACM Distinguished Speaker, and the Editor-in-Chief of ACM Transactions on Reconfigurable Technology and Systems (TRETS). He is the Director of the AMD-Xilinx Center of Excellence and the Hybrid-Cloud Thrust Co-Lead of the IBM-Illinois Discovery Accelerator Institute at UIUC. He has been involved in several startup companies, such as AutoESL and Inspirit IoT. He received his Ph.D. from the Computer Science Department of UCLA in 2005.

Hosted By: Prof. Sitao Huang


“Design Automation and Computing based on Additive Printed Electronics”

Speaker: Mehdi Tahoori

Date and Time: Thursday, September 29, 11:00 am

Location: EH 2430


Printed electronics is an emerging and fast-growing field that can serve many demanding application domains such as wearables, smart sensors, and the Internet of Things (IoT). Unlike the traditional computing and electronics domain, which is mostly driven by performance characteristics, printed and flexible electronics based on additive manufacturing processes are mainly associated with low fabrication cost and low energy consumption. Printed electronics offers certain technological advantages over its silicon-based counterparts, such as mechanical flexibility, low process temperatures, and maskless, additive manufacturing. Electrolyte-gated transistors (EGTs) using solution-processed inorganic materials, fully printed with inkjet printers at low temperatures, are very promising for providing such solutions. However, due to the low device count, large device dimensions, and high variability that originate in low-cost additive manufacturing, the existing design automation and computing paradigms of digital VLSI are not applicable to printed electronics. This talk covers the technology, process, modeling, fabrication, design automation, computing paradigms, and security aspects of circuits and systems based on additive printed technologies.


Mehdi B. Tahoori is Professor and Chair of Dependable Nano-Computing at the Karlsruhe Institute of Technology (KIT), Germany. He received the B.S. degree in computer engineering from Sharif University of Technology, Tehran, Iran, in 2000, and the M.S. and Ph.D. degrees in electrical engineering from Stanford University, Stanford, CA, in 2002 and 2003, respectively. He is currently the Deputy Editor-in-Chief of IEEE Design & Test magazine and was the Editor-in-Chief of Elsevier’s Microelectronics Reliability journal. He was the Program Chair of the IEEE VLSI Test Symposium (VTS) in 2018 and 2021, and the General Chair of the IEEE European Test Symposium (ETS) in 2019. Prof. Tahoori received the US National Science Foundation Early Faculty Development (CAREER) Award in 2008 and a European Research Council (ERC) Advanced Grant in 2022. He has received a number of best paper awards and nominations at various conferences and journals. He is currently the Chair of the IEEE European Test Technology Technical Council (eTTTC). He is a Fellow of the IEEE.

Hosted by: Prof. Bozorgzadeh


“DRAC: Designing RISC-V-based Accelerators for next generation Computers”

Speaker: Miquel Moreto

Date and Time: Wednesday, August 10, 11:00 am

Location: DBH 3011 or Zoom Link


Designing RISC-V-based Accelerators for next-generation Computers (DRAC) is a 3-year project (2019-2022) funded by the ERDF Operational Program of Catalonia 2014-2020. DRAC will design, verify, implement, and fabricate a high-performance general-purpose processor incorporating different accelerators based on RISC-V technology, with specific applications in post-quantum security, genomics, and autonomous navigation. In this talk, we will provide an overview of the main achievements of the DRAC project, including the fabrication of Lagarto, the first RISC-V processor developed in Spain.


Miquel Moreto is a Ramon y Cajal Fellow in the Computer Architecture Department (DAC) at the Universitat Politècnica de Catalunya (UPC), and he leads the High Performance Domain Specific Architectures team at the Barcelona Supercomputing Center (BSC). Miquel received his Ph.D. from UPC in 2010. After finishing the Ph.D., he spent 15 months at the International Computer Science Institute (ICSI), affiliated with UC Berkeley, as a Fulbright postdoctoral fellow during 2011 and 2012. In 2013, he returned to Barcelona to work on multiple European projects (RoMoL, Mont-Blanc, EPI, DeepHealth, eProcessor) and industrial projects (Arm, IBM, Lenovo). In 2019, he led the design and fabrication of Lagarto, the first processor developed in Spain based on the open-source RISC-V instruction set architecture. Currently, he coordinates the DRAC project, which is advancing the Lagarto initiative with new generations of the Lagarto processor and accelerators.

Hosted By: Prof. Veidenbaum


“Securing Hardware for Designing Trustworthy Systems”

Speaker: Prabhat Mishra

Date and Time: Tuesday, August 2, 2:00 pm

Location: DBH 4011 or Zoom Link


System-on-Chip (SoC) is the brain behind computing and communication in a wide variety of embedded systems. SoC design based on reusable hardware Intellectual Property (IP) has emerged as a pervasive industry practice that dramatically reduces design and verification cost while meeting aggressive time-to-market constraints. Growing reliance on these pre-verified hardware IPs, often gathered from untrusted third-party vendors, severely affects the security and trustworthiness of computing platforms, so it is crucial to evaluate the integrity and trustworthiness of third-party IPs when designing trustworthy systems. In this talk, I will introduce a wide variety of hardware security vulnerabilities, design-for-security solutions, and possible attacks and countermeasures. I will briefly describe how the complementary abilities of simulation-based validation, formal verification, and side-channel analysis can be effectively utilized for comprehensive SoC security and trust validation.


Prabhat Mishra is a Professor in the Department of Computer and Information Science and Engineering and a UF Research Foundation Professor at the University of Florida. He received his Ph.D. in Computer Science from the University of California at Irvine in 2004. His research interests include embedded and cyber-physical systems, hardware security and trust, energy-aware computing, system-on-chip validation, machine learning, and quantum computing. He has published 8 books, 35 book chapters, and more than 200 research articles in premier international journals and conferences. His research has been recognized by several awards including the NSF CAREER Award, IBM Faculty Award, three best paper awards, and EDAA Outstanding Dissertation Award. He currently serves as an Associate Editor of IEEE Transactions on VLSI Systems and ACM Transactions on Embedded Computing Systems. He is an IEEE Fellow and an ACM Distinguished Scientist.

Hosted By: Prof. Nikil Dutt


“Electric Power to the People: Secure & Resilient Cyber-Physical Systems in the Age of Renewable Energy”

Speaker: Charalambos Konstantinou

Date and Time: Friday, July 15, 10:00 am

Location: EH 2430


Rapid advancements in power electronics, along with the increasing penetration of distributed energy resources (DERs), are transforming electric power grids. Furthermore, the increasing types and numbers of loads, as well as electric transportation, are stressing the network. Overall, the power system is facing unprecedented changes in operation and control as more, and more diverse, sources and loads are connected to this complex cyber-physical energy system. In light of this modernization, and due to the growing number of Internet-of-Things (IoT) connected controllers and the use of communication and control interfaces, making cyber-physical energy systems resilient to high-impact, low-probability adverse cyber-physical events, such as cyber-attacks, is a major priority for power grid operations. Such incidents, if left unabated, can intensify and elicit system dynamics instabilities, eventually causing outages and system failures. In this talk, we will give an overview of the research of the Secure Next Generation Resilient Systems (SENTRY) lab at KAUST, presenting different methodologies, in the age of renewable energy, that contribute towards building secure and resilient cyber-physical grids.


Charalambos Konstantinou is an Assistant Professor of Computer Science (CS) and Affiliate Professor of Electrical and Computer Engineering (ECE) at the Computer, Electrical and Mathematical Science and Engineering Division (CEMSE) of King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia. He is the Principal Investigator of the SENTRY Lab (Secure Next Generation Resilient Systems) and a member of the Resilient Computing and Cybersecurity Center (RC3) at KAUST. Before joining KAUST in 2021, he was an Assistant Professor with the Center for Advanced Power Systems (CAPS) at Florida State University (FSU). His research interests are in secure, trustworthy, and resilient cyber-physical and embedded IoT systems. He is also interested in critical infrastructure security and resilience, with a special focus on smart grid technologies, renewable energy integration, and real-time simulation. He received a Ph.D. in Electrical Engineering from New York University (NYU), NY, in 2018, and an M.Eng. degree in Electrical and Computer Engineering from the National Technical University of Athens (NTUA), Greece, in 2012.

Hosted By: Prof. Mohammad Al Faruque


“Anti-virus hardware: Applications in Embedded, Automotive and Power Systems Security”

Speaker: Kanad Basu

Date and Time: Tuesday, June 7, 2:00 pm

Location: Zoom


Anti-virus software (AVS) tools are used to detect malware in a system. However, software-based AVS is vulnerable to attacks: a malicious entity can exploit its vulnerabilities to subvert the AVS. Recently, hardware components such as Hardware Performance Counters (HPCs) have been used for malware detection, in the form of Anti-virus Hardware (AVH). In this talk, we will discuss HPC-based AVHs for improving embedded security and privacy. Furthermore, we will discuss the application of HPCs to securing cyber-physical systems (CPS), namely automotive and microgrid systems, and then discuss their pitfalls. Finally, we will present PREEMPT, a zero-overhead, high-accuracy, and low-latency technique that detects malware by re-purposing the embedded trace buffer (ETB), a debug hardware component available in most modern processors. PREEMPT combines these hardware-level observations with machine-learning-based classifiers to preempt malware before it can cause damage. We will conclude the talk with future research directions and challenges.
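The general pipeline behind HPC-based detection, sampling hardware counters and feeding the readings to a classifier, can be sketched in a few lines. The counter choices, numeric values, and the nearest-centroid classifier below are illustrative assumptions for the sketch, not the speaker’s actual method or data.

```python
import math

# Synthetic per-interval readings from three hypothetical performance
# counters: (branch mispredictions, cache misses, instructions retired).
benign_train = [(120, 300, 9000), (130, 310, 9100), (110, 290, 8900)]
malware_train = [(400, 900, 5200), (420, 880, 5400), (390, 910, 5000)]

def centroid(samples):
    """Component-wise mean of a list of equal-length tuples."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(len(samples[0])))

def classify(sample, centroids):
    """Label a counter snapshot by its nearest class centroid (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

centroids = {"benign": centroid(benign_train),
             "malware": centroid(malware_train)}

print(classify((125, 305, 9050), centroids))  # a benign-looking interval
print(classify((410, 870, 5100), centroids))  # a malware-like interval
```

In a real AVH, the feature vectors would come from the processor’s counter registers (or, in PREEMPT, the trace buffer) and the classifier would be trained on labeled execution traces.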


Kanad Basu received his Ph.D. from the Department of Computer and Information Science and Engineering, University of Florida. His thesis focused on improving signal observability for post-silicon validation. After his Ph.D., Kanad worked at semiconductor companies such as IBM and Synopsys; during his Ph.D. studies, he interned at Intel. Currently, Kanad is an Assistant Professor in the Electrical and Computer Engineering Department of the University of Texas at Dallas, where he leads the Trustworthy and Intelligent Embedded Systems (TIES) lab. Prior to this, Kanad was an Assistant Research Professor in the Electrical and Computer Engineering Department at NYU. He has authored 1 book, 2 US patents, 2 book chapters, and several peer-reviewed journal and conference articles. His research is supported by SRC, NSF, DARPA, and Ford Motors. Kanad received the Best Paper Award at the International Conference on VLSI Design 2011 and an honorable mention award at the same conference in 2021. Several news agencies, including NBC Austin and CBS Dallas-Fort Worth, have covered his research. Kanad’s current research interests are hardware and systems security as well as deep learning hardware.

Hosted By: Prof. Nikil Dutt


“Runtime Monitoring of Distributed Cyber-physical Systems”

Speaker: Borzoo Bonakdarpour

Date and Time: Monday, May 23, 2022 at 2:00 pm

Location: DBH 3011


We consider the problem of detecting violations of specifications expressed in signal temporal logic over distributed continuous-time and continuous-valued signals in cyber-physical systems (CPS). We assume a partially synchronous setting, where a clock synchronization algorithm guarantees a bound on the clock drift among all signals. We introduce a novel retiming method that allows reasoning about the correctness of predicates among continuous-time signals that do not share a global view of time. The resulting problem is encoded as an SMT problem, and we introduce techniques to solve the SMT encoding efficiently. Leveraging simple knowledge of the physical dynamics allows further runtime reductions. We will discuss case studies on monitoring a network of autonomous ground vehicles, a network of aerial vehicles, and a water distribution system.

Borzoo Bonakdarpour is currently an Associate Professor of Computer Science at Michigan State University. His research interests include formal methods and their application in distributed systems, computer security, and cyber-physical systems. He has published more than 100 articles and papers in top journals and conferences. His work in these areas has received multiple best paper awards from highly prestigious conferences, including RV’21, SRDS’17, SSS’14, and SIES’10. He chaired the Technical Program Committees of the SRDS’20, SSS’16, and RV’14 conferences.

Hosted By: Prof. Eli Bozorgzadeh


“Beyond Approximate Computing: Quality-Scalability for Low-Power Embedded Systems and Machine Learning”

Speaker: Younghyun Kim

Date and Time: Tuesday, May 17, 2022 at 2:00 p.m.

Location: Zoom


Approximate computing is a new paradigm for accomplishing energy-efficient computing in this twilight of Moore’s law. It relaxes the exactness requirement on computation results for intrinsically error-resilient applications, such as deep learning and signal processing, producing results that are “just good enough.” It exploits the fact that the output quality of such error-resilient applications is not fundamentally degraded even if the underlying computations are greatly approximated. This favorable energy-quality tradeoff opens up new opportunities to improve the energy efficiency of computing, and a large body of approximate computing methods for energy-efficient “data processing” has been proposed. In this talk, I will introduce approximate computing methods that accomplish “full-system energy-quality scalability.” They extend the scope of approximation from the processor to other system components, including sensors and interconnects, for energy-efficient “data generation” and “data transfer,” fully exploiting the energy-quality tradeoffs across the entire system. I will also discuss how approximate computing can benefit the implementation of machine learning on ultra-low-power embedded systems.
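The energy-quality tradeoff at the heart of approximate computing can be made concrete with a small sketch: quantizing data to fewer bits (a stand-in for a cheaper, lower-precision datapath) degrades quality gradually, roughly halving the worst-case error per extra bit. This is only an illustration of the general idea, not Prof. Kim’s specific methods.

```python
import random

def quantize(x, bits):
    """Round x in [0, 1] onto (2**bits - 1) uniform levels,
    mimicking a crude low-precision datapath."""
    levels = (1 << bits) - 1
    return round(x * levels) / levels

rng = random.Random(1)
signal = [rng.random() for _ in range(10_000)]

# Fewer bits ~ less switching energy but lower quality; worst-case
# per-sample error shrinks roughly by half for each added bit.
for bits in (4, 8, 12):
    worst = max(abs(quantize(s, bits) - s) for s in signal)
    print(bits, worst)
```

An "energy-quality scalable" system exposes this knob (here, `bits`) so the application can pick the cheapest setting whose output is still good enough.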


Prof. Younghyun Kim is an Assistant Professor in the Department of Electrical and Computer Engineering and an ECE Grainger Faculty Scholar at the University of Wisconsin-Madison, where he leads the Wisconsin Embedded Systems and Computing (WISEST) Laboratory. Prof. Kim received his B.S. degree in computer science and engineering and his Ph.D. degree in electrical engineering and computer science from Seoul National University in 2007 and 2013, respectively. He was a Postdoctoral Research Assistant at Purdue University and a visiting scholar at the University of Southern California. His current research interests include energy-efficient computing and the security and privacy of the Internet of Things. Prof. Kim is a recipient of several awards, including the NSF Faculty Early Career Development Program (CAREER) Award, a Facebook Research Award, an IEEE Micro Top Pick, the EDAA Outstanding Dissertation Award, and Design Contest Awards at the ACM/IEEE International Symposium on Low Power Electronics and Design (ISLPED). He has served on the Technical Program Committees of various conferences on design automation and embedded systems, including the Design Automation Conference (DAC), ISLPED, the Asia and South Pacific Design Automation Conference (ASP-DAC), the International Conference on VLSI Design (VLSID), and the Symposium on Applied Computing (SAC). He served as a Guest Editor for a Special Issue of the Integration, the VLSI Journal (Elsevier).

Hosted By: Prof. Nikil Dutt


“Bridging the Gap between Algorithm and Architecture”

Speaker: Biresh Kumar Joardar

Date/Time: Thursday, September 23, 2021

Location: Zoom


Advanced computing systems have long been enablers of breakthroughs in Machine Learning (ML). However, as ML algorithms become more complex and dataset sizes increase, existing computing platforms are no longer sufficient to bridge the gap between algorithmic innovation and hardware design. For example, DNN training can be accelerated on GPUs, but GPUs are bandwidth-bottlenecked, which can lead to sub-optimal performance. New designs such as processing-in-memory, where the memory is placed close to the computing cores, can address these limitations. However, designing these new architectures often involves optimizing multiple conflicting objectives (e.g., performance, power, thermal, reliability). Design problems are further exacerbated by the availability of different core architectures (CPUs, GPUs, NVMs, FPGAs, etc.) and interconnection technologies (e.g., TSV-based stacking, M3D, photonics, wireless), each with a set of unique design requirements that must be satisfied. The resulting diversity in the choice of hardware has made the design, evaluation, and testing of new architectures an increasingly challenging problem.

In this presentation, we will discuss how machine learning techniques can be used to solve complex hardware design problems (and vice versa). More specifically, we will highlight the symbiotic relationship between hardware design and machine learning. We will demonstrate how machine learning techniques can be used to advance hardware designs spanning edge devices to the cloud, which in turn will empower further advances in machine learning (i.e., machine learning for machine learning).


Biresh Kumar Joardar is currently an NSF-sponsored Computing Innovation (postdoctoral) Fellow in the Department of Electrical and Computer Engineering at Duke University. He obtained his Ph.D. from Washington State University in 2020. His Ph.D. research focused on using machine learning algorithms to design and optimize heterogeneous manycore systems. As a CI Fellow, Biresh is currently working on developing reliable and energy-efficient architectures for machine learning applications. He received the Outstanding Graduate Student Researcher Award at Washington State University in 2019. Biresh has published in numerous prestigious conferences (including ESWEEK, DATE, and ICCAD) and journals (TC, TCAD, and TECS). His work has been nominated for Best Paper Awards at DATE 2019 and DATE 2020, and he won the Best Paper Award at NOCS 2019. His current research interests include machine learning, manycore architectures, accelerators for deep learning, and hardware reliability and security.

Host: Prof. Mohammad Al Faruque


“Intermittent Learning on Harvested Energy”

Speaker: Shahriar Nirjon

Date/Time: Thursday, September 2, 2021 from 9am-10am

Location: Zoom


Years of technological advancements have made it possible for today’s small, portable, electronic devices to last for years on battery power, and to last indefinitely when powered by harvesting energy from their surrounding environment. Unfortunately, the prolonged life of these ultra-low-power systems poses a fundamentally new problem. While the devices last for years, the programs that run on them become obsolete when the nature of the sensory input or the operating conditions change. The effect of continued execution of such an obsolete program can be catastrophic: for example, if a cardiac pacemaker fails to recognize an impending cardiac arrest because the patient has aged or their physiology has changed, it will cause more harm than good. Hence, being able to react, adapt, and evolve is necessary for these systems to guarantee their accuracy and response time. We aim to devise algorithms, tools, systems, and applications that enable ultra-low-power, sensor-enabled computing devices to execute complex machine learning algorithms while being powered solely by harvested energy. Unlike common practice, where a fixed classifier runs on a device, we take a fundamentally different approach in which the classifier is constructed so that it can adapt and evolve as the sensory input to the system, or the application-specific requirements, such as the time, energy, and memory constraints of the system, change during the system’s extended lifetime.


Dr. Shahriar Nirjon is an Assistant Professor of Computer Science at the University of North Carolina at Chapel Hill, NC. He is interested in Embedded Intelligence: the general idea of making resource-constrained real-time and embedded sensing systems capable of learning, adapting, and evolving. Dr. Nirjon builds practical cyber-physical systems that involve embedded sensors and mobile devices, mobility and connectivity, and mobile data analytics. His work has applications in remote health and wellness monitoring and mobile health. Dr. Nirjon received his Ph.D. from the University of Virginia, Charlottesville, and has won a number of awards, including four Best Paper Awards: at Mobile Systems, Applications, and Services (MobiSys 2014), the Real-Time and Embedded Technology and Applications Symposium (RTAS 2012), Distributed Computing in Sensor Systems (DCOSS ’19), and Challenges in AI and Machine Learning for IoT (AIChallengeIoT ’20). Dr. Nirjon received the NSF CAREER Award in 2021. Prior to UNC, he worked as a Research Scientist in the Networking and Mobility Lab at Hewlett-Packard Labs in Palo Alto, CA.

Host: Prof. Mohammad Al Faruque


“Answering Multi-Dimensional Analytical Queries under Local Differential Privacy”

Name: Tianhao Wang

Date and Time: Thursday, February 27, 2020

Location: Engineering Hall 2430


When collecting information, local differential privacy (LDP) relieves users’ privacy concerns because noise is added to each user’s private information before it leaves the device. The LDP technique has been deployed by Google, Apple, and Microsoft for data collection and monitoring. In this talk, I will share the key algorithms we developed at the Chinese e-commerce company Alibaba. We study the problem of answering multi-dimensional queries under LDP and propose several algorithms to handle queries with different types of predicates and aggregation functions. We built a prototype that enables different departments to collect, share, and analyze data within the company.
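The "noise before leaving the device" idea can be illustrated with randomized response, the classic LDP mechanism for a single bit (this is a textbook sketch, not one of the talk's multi-dimensional algorithms): each user flips their bit with a probability set by the privacy budget ε, and the aggregator debiases the noisy reports to recover the population frequency.

```python
import math
import random

def randomized_response(bit, epsilon, rng):
    """Report the true bit with probability e^eps / (1 + e^eps); else flip it."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if rng.random() < p else 1 - bit

def estimate_frequency(reports, epsilon):
    """Debias the noisy reports to estimate the true fraction of 1-bits."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

rng = random.Random(0)
true_bits = [1 if rng.random() < 0.3 else 0 for _ in range(100_000)]  # 30% ones
reports = [randomized_response(b, 1.0, rng) for b in true_bits]       # eps = 1
est = estimate_frequency(reports, 1.0)
print(est)  # close to 0.3, yet no individual report is trustworthy
```

No single report reveals a user's true bit with confidence, but the aggregate estimate converges to the true frequency; answering richer multi-dimensional queries under the same guarantee is the harder problem the talk addresses.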


Tianhao Wang is a Ph.D. candidate in the Department of Computer Science at Purdue University, advised by Prof. Ninghui Li. He received his B.Eng. degree from the Software School of Fudan University in 2015. His research interests include differential privacy and local differential privacy. He is a recipient of the Bilsland Dissertation Fellowship and the Emil Stefanov Memorial Fellowship. He was a member of DPSyn, which won the second-place award in the NIST PSCR Differential Privacy Synthetic Data Challenge.

“Energy-Aware Data Center Management: Monitoring Trends and Insights via Machine Learning”

Name: Hayk Shoukourian

Date and Time: Friday, November 15, 2019

Location: Donald Bren Hall 3011


The increasing demand for online resources has placed an immense energy burden on contemporary High Performance Computing (HPC) and cloud data centers. Even though each new generation of HPC systems delivers higher power efficiency, the growth in system density and overall performance has continuously driven energy consumption upward. This energy consumption not only translates into high operational bills and affects the environment, but also influences the stability of the underlying power grid. In fact, all of this has already led some governmental organizations to reconsider data center deployment procedures, with increased demands for renewable energy utilization and waste heat recovery.

The talk will give an overview of the Leibniz Supercomputing Centre (LRZ), introduce its flagship HPC systems, and discuss its high-temperature direct liquid cooling solution and waste heat reuse. This will be followed by recent R&D results that rely on ML technologies for forecasting various energy- and power-related Key Performance Indicators at the level of the data center’s building infrastructure. The talk will highlight applications of the developed model, outlining its use in the proactive management of modern data centers to tackle the above-mentioned challenges.


Dr. Hayk Shoukourian received his M.Sc. and Ph.D. in Computer Science from the Technical University of Munich in 2012 and 2015, respectively. He joined the Leibniz Supercomputing Centre (LRZ) in 2012, and his R&D activities mainly involve efficient energy/power consumption management of HPC data centers. In his current role, Dr. Shoukourian is responsible for adaptive modelling of the interoperability between target HPC systems and the building infrastructure of the supercomputing site. He is also the team leader of the PRACE (Partnership for Advanced Computing in Europe) work package on “HPC Commissioning and Prototyping”. Since August 2018, Dr. Shoukourian has also been a lecturer in Computer Science at Ludwig-Maximilians-Universität München (LMU).

“Scalable Set-based Analysis for Verification of Cyber-Physical Systems”

Name: Stanley Bak

Date and Time: Tuesday, July 9, 2019

Location: Engineering Hall 2430


Cyber-physical systems combine complex physics with complex software. Although these systems offer significant potential in fields such as smart grid design, autonomous robotics and medical systems, verification of CPS designs remains challenging. Model-based design permits simulations to be used to explore potential system behaviors, but individual simulations do not provide full coverage of what the system can do. In particular, simulations cannot guarantee the absence of unsafe behaviors, which is unsettling as many CPS are safety-critical systems.

The goal of set-based analysis methods is to explore a system’s behaviors using sets of states rather than individual states. The usual downside of this approach is limited scalability: set-based analysis methods have traditionally worked only for very small models. This talk describes our recent progress on improving the scalability of set-based reachability computation for LTI hybrid automaton models, some of which can apply to very large systems (up to one billion continuous state variables!). Lastly, we’ll discuss the significant overlap between the techniques used in our scalable reachability analysis methods and set-based input/output analysis of neural networks.
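As a toy illustration of the set-based idea (not the talk's actual algorithms, which handle continuous-time hybrid automata), one can propagate an axis-aligned box of states through a discrete-time linear system using interval arithmetic; the matrix below is an arbitrary stable example chosen for the sketch.

```python
# Toy set-based reachability: push an axis-aligned box through the
# discrete-time linear system x[k+1] = A x[k] using interval arithmetic.
A = [[0.5, 0.2],
     [0.1, 0.4]]

def step_box(lo, hi):
    """Bounding box of the image of the box [lo, hi] under x -> A x."""
    new_lo, new_hi = [], []
    for row in A:
        l = h = 0.0
        for a, xl, xh in zip(row, lo, hi):
            if a >= 0:          # positive coefficient: min maps to min
                l += a * xl
                h += a * xh
            else:               # negative coefficient: min maps to max
                l += a * xh
                h += a * xl
        new_lo.append(l)
        new_hi.append(h)
    return new_lo, new_hi

lo, hi = [-1.0, -1.0], [1.0, 1.0]   # every initial state at once
for _ in range(50):
    lo, hi = step_box(lo, hi)
# For this stable A, the box provably contracts toward the origin,
# covering ALL trajectories, which no finite set of simulations can do.
```

Scaling this style of reasoning to hybrid dynamics and huge state dimensions, while fighting the over-approximation that interval boxes introduce, is exactly where the research challenge lies.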


Stanley Bak is a research computer scientist investigating the formal verification of cyber-physical systems. He strives to create scalable and automatic formal analysis methods for complex models with both ordinary differential equations and discrete behaviors. The ultimate goal is to make formal approaches applicable, which demands developing new theory, programming efficient tools and building experimental systems.

Stanley Bak received a Bachelor’s degree in Computer Science from Rensselaer Polytechnic Institute (RPI) in 2007 (summa cum laude), a Master’s degree in Computer Science from the University of Illinois at Urbana-Champaign (UIUC) in 2009, and a PhD from UIUC in 2013. He received the Founders Award of Excellence for his undergraduate research at RPI in 2004, the Debra and Ira Cohen Graduate Fellowship from UIUC twice, in 2008 and 2009, and was awarded the Science, Mathematics and Research for Transformation (SMART) Scholarship from 2009 to 2013. Stanley worked as a research computer scientist for the Air Force Research Laboratory (AFRL) from 2013 to 2018, both in the Information Directorate and the Aerospace Systems Directorate. Currently, he helps run Safe Sky Analytics, a small research consulting company working with the FAA and the Air Force.

“Safety Verification and Training for Learning-enabled Cyber-Physical Systems”

Name: Jyotirmoy Vinay Deshmukh

Date and Time: Tuesday, May 14, 2019 at 2:00 p.m.

Location: Donald Bren Hall 4011


With the increasing popularity of deep learning, there have been several efforts to use neural-network-based controllers in cyber-physical system applications. However, neural networks are equally well-known for their lack of interpretability, explainability, and verifiability. This is especially an issue for safety-critical cyber-physical systems such as unmanned aerial vehicles or autonomous ground vehicles. How can we verify that a neural-network-based controller will always keep the system safe? We look at a new verification approach based on automatically synthesizing a barrier certificate for the system, which proves that, starting from a given set of initial conditions, the system can never reach an unsafe state. Barrier certificates are essentially a generalization of inductive invariants to continuous dynamical systems, and we will show how nonlinear SMT solvers can be used to establish the barrier certificate conditions. A more intriguing challenge is whether we can actually train neural networks to obey safety constraints. We will look at a new way of reward shaping in reinforcement learning that could help achieve this goal.
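To make the barrier-certificate conditions concrete, here is a toy one-dimensional instance. The system, the sets, and the candidate certificate are illustrative choices, and the grid check below is only a numerical sanity check; the SMT-based approach in the talk establishes the same conditions symbolically, for all states.

```python
def f(x):
    """Dynamics of a toy one-dimensional system: dx/dt = -x."""
    return -x

def B(x):
    """Candidate barrier certificate."""
    return x * x - 2.0

def lie_derivative(x):
    """Rate of change of B along the flow: B'(x) * f(x) = 2x * (-x)."""
    return 2.0 * x * f(x)

init = [i / 100.0 for i in range(-100, 101)]        # initial set [-1, 1]
unsafe = [2.0 + i / 100.0 for i in range(301)]      # unsafe set [2, 5]
domain = [i / 100.0 for i in range(-500, 501)]      # state space [-5, 5]

# The three barrier conditions (an SMT solver would prove them for all x):
assert all(B(x) < 0 for x in init)        # negative on every initial state
assert all(B(x) > 0 for x in unsafe)      # positive on every unsafe state
assert all(lie_derivative(x) <= 0 for x in domain)  # B never increases
```

Because B starts negative and can never increase, no trajectory from the initial set can cross the zero level set of B into the unsafe region; that inductive argument is exactly what makes barrier certificates a continuous analogue of inductive invariants.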


Jyotirmoy V. Deshmukh (Jyo) is an assistant professor in the Department of Computer Science in the Viterbi School of Engineering at the University of Southern California in Los Angeles, USA. Before joining USC, Jyo worked as a Principal Research Engineer in Toyota Motors North America R&D. He got his Ph.D. degree from the University of Texas at Austin and was a post-doctoral fellow at the University of Pennsylvania. Jyo’s research interest is in the broad area of formal methods. Currently, Jyo is interested in using logic-based methods for machine learning, and in techniques for the analysis, design, verification and synthesis of cyber-physical systems, especially those that use AI-based perception, control and planning algorithms.

“Poly-Logarithmic Side Channel Rank Estimation via Exponential Sampling”

Name: Liron David

Date and Time: Tuesday, March 12 at 10:00 a.m.

Location: Engineering Hall 5204


Rank estimation is an important tool for side-channel evaluation laboratories. It allows estimating the remaining security after an attack has been performed, quantified as the time complexity and memory consumption required to brute-force the key given the leakages as probability distributions over d subkeys (usually key bytes). These estimates are particularly useful when the key is not reachable by exhaustive search.

We propose ESrank, the first rank estimation algorithm that enjoys provable poly-logarithmic time and space complexity and also achieves excellent practical performance. Our main idea is to use exponential sampling to drastically reduce the algorithm’s complexity. Importantly, ESrank is simple to build from scratch and requires no algorithmic tools beyond a sorting function. After rigorously bounding its accuracy, time, and space complexities, we evaluated the performance of ESrank on a real SCA data corpus and compared it to the currently best histogram-based algorithm. We show that ESrank gives excellent rank estimation (with roughly a 1-bit margin between lower and upper bounds), with performance on par with the histogram algorithm: a run-time of under 1 second on a standard laptop using 6.5 MB of RAM.


Liron David is a Ph.D. candidate in Electrical Engineering at Tel-Aviv University under the supervision of Prof. Avishai Wool. She received her B.Sc. degree in Computer Science and Electrical and Electronics Engineering and her M.Sc. degree in Electrical Engineering, both from Tel-Aviv University. Liron won the Weinstein award for excellence in studies in 2017, the Weinstein best paper prize in 2018, and the Tel-Aviv University excellence in teaching award in 2018.