Seminars at CECS

“Anti-virus hardware: Applications in Embedded, Automotive and Power Systems Security”

Speaker: Kanad Basu, Assistant Professor, Department of Electrical and Computer Engineering, University of Texas at Dallas

Date and Time: Tuesday, June 7, 2022 at 2:00 p.m.

Location: Zoom https://uci.zoom.us/j/97807443602

Abstract:

Anti-virus software (AVS) tools are used to detect malware in a system. However, software-based AVS is vulnerable to attacks: a malicious entity can exploit these vulnerabilities to subvert the AVS itself. Recently, hardware components such as Hardware Performance Counters (HPCs) have been used for malware detection, in the form of anti-virus hardware (AVH). In this talk, we will discuss HPC-based AVHs for improving embedded security and privacy. Furthermore, we will discuss the application of HPCs in securing cyber-physical systems (CPS), namely automotive and microgrid systems, and then examine their pitfalls. Finally, we will present PREEMPT, a zero-overhead, high-accuracy, low-latency technique that detects malware by re-purposing the embedded trace buffer (ETB), a debug hardware component available in most modern processors. PREEMPT combines these hardware-level observations with machine learning-based classifiers to preempt malware before it can cause damage. We will conclude the talk with future research directions and challenges.
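
For intuition, the common recipe behind such hardware-counter-based detectors can be sketched in a few lines of Python; everything below (the event choices, feature values, and the random-forest model) is an illustrative assumption on my part, not a detail of PREEMPT:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: rows are execution windows, columns are HPC event counts
# (e.g., instructions retired, cache misses, branch mispredictions).
X_benign = rng.normal(loc=[1e6, 2e3, 5e2], scale=[1e5, 3e2, 1e2], size=(500, 3))
X_malware = rng.normal(loc=[8e5, 9e3, 2e3], scale=[1e5, 8e2, 4e2], size=(500, 3))
X = np.vstack([X_benign, X_malware])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malware

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")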

Biography:

Kanad Basu received his Ph.D. from the Department of Computer and Information Science and Engineering, University of Florida. His thesis focused on improving signal observability for post-silicon validation. After his Ph.D., Kanad worked at semiconductor companies such as IBM and Synopsys; during his Ph.D., he interned at Intel. Currently, Kanad is an Assistant Professor in the Electrical and Computer Engineering Department of the University of Texas at Dallas, where he leads the Trustworthy and Intelligent Embedded Systems (TIES) lab. Prior to this, he was an Assistant Research Professor in the Electrical and Computer Engineering Department of NYU. He has authored one book, two US patents, two book chapters and several peer-reviewed journal and conference articles. His research is supported by SRC, NSF, DARPA and Ford Motors. Kanad received the Best Paper Award at the International Conference on VLSI Design 2011 and an honorable mention award at the same conference in 2021. Several news agencies have covered his research, including NBC Austin and CBS Dallas-Fort Worth. His current research interests are hardware and systems security as well as deep learning hardware.

Hosted By: Prof. Nikil Dutt

“Runtime Monitoring of Distributed Cyber-physical Systems”

Speaker: Borzoo Bonakdarpour, Associate Professor, Department of Computer Science and Engineering, Michigan State University

Date and Time: Monday, May 23, 2022 at 2:00 pm

Location: DBH 3011

Abstract: 

We consider the problem of detecting violations of specifications in signal temporal logic (STL) over distributed continuous-time and continuous-valued signals in cyber-physical systems (CPS). We assume a partially synchronous setting, where a clock synchronization algorithm guarantees a bound on the clock drift among all signals. We introduce a novel retiming method that allows reasoning about the correctness of predicates among continuous-time signals that do not share a global view of time. The resulting problem is encoded as an SMT problem, and we introduce techniques to solve the SMT encoding efficiently. Leveraging simple knowledge of the physical dynamics allows further runtime reductions. We will discuss case studies on monitoring a network of autonomous ground vehicles, a network of aerial vehicles, and a water distribution system.
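
To make the retiming idea concrete, here is a minimal sketch using the z3 SMT solver (an illustration of the setting, not the paper's actual encoding; the intervals and drift bound are invented): each locally timestamped observation may have occurred at any global time within the clock-drift bound, so satisfiability means a simultaneous violation cannot be ruled out even when the local intervals do not overlap.

from z3 import Real, Solver, And, sat

eps = 0.05  # assumed clock-synchronization (drift) bound

# Invented observations: node A saw its predicate hold during local time
# [1.00, 1.10]; node B saw its predicate hold during local time [1.18, 1.30].
g = Real('g')                     # candidate global time of a joint violation
tA, tB = Real('tA'), Real('tB')   # retimed instants on each local clock

s = Solver()
s.add(And(tA >= 1.00, tA <= 1.10))        # predicate interval on A's clock
s.add(And(tB >= 1.18, tB <= 1.30))        # predicate interval on B's clock
s.add(And(tA - eps <= g, g <= tA + eps))  # retiming within the drift bound
s.add(And(tB - eps <= g, g <= tB + eps))

# sat: the two predicates may have held simultaneously despite the
# non-overlapping local intervals, so the monitor must flag a violation.
print("possible joint violation" if s.check() == sat else "ruled out")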

Biography: Borzoo Bonakdarpour is currently an Associate Professor of Computer Science at Michigan State University. His research interests include formal methods and their application in distributed systems, computer security, and cyber-physical systems. He has published more than 100 articles and papers in top journals and conferences. His work in these areas has received multiple best paper awards from highly prestigious conferences, including RV’21, SRDS’17, SSS’14, and SIES’10. He chaired the Technical Program Committees of the SRDS’20, SSS’16, and RV’14 conferences.

Hosted By: Prof. Eli Bozorgzadeh

“Beyond Approximate Computing: Quality-Scalability for Low-Power Embedded Systems and Machine Learning”

Speaker: Younghyun Kim, Assistant Professor, Department of Electrical and Computer Engineering, University of Wisconsin–Madison

Date and Time: Tuesday, May 17, 2022 at 2:00 p.m.

Location: Zoom https://uci.zoom.us/j/98632011722

Abstract:

Approximate computing is a new paradigm for accomplishing energy-efficient computing in this twilight of Moore’s law: it relaxes the exactness requirement on computation results for intrinsically error-resilient applications, such as deep learning and signal processing, and produces results that are “just good enough.” It exploits the fact that the output quality of such error-resilient applications is not fundamentally degraded even if the underlying computations are greatly approximated. This favorable energy-quality tradeoff opens up new opportunities to improve the energy efficiency of computing, and a large body of approximate computing methods for energy-efficient “data processing” has been proposed. In this talk, I will introduce approximate computing methods that accomplish “full-system energy-quality scalability.” They extend the scope of approximation from the processor to other system components, including sensors, interconnects, etc., for energy-efficient “data generation” and “data transfer,” to fully exploit the energy-quality tradeoffs across the entire system. I will also discuss how approximate computing can benefit the implementation of machine learning on ultra-low-power embedded systems.
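
As a toy illustration of an energy-quality knob (my own example, not from the talk): reducing the bit-width of sensor samples cuts the energy of data generation and transfer roughly in proportion to the bits kept, while output quality, measured here as SNR, degrades only gradually.

import numpy as np

rng = np.random.default_rng(1)
signal = rng.normal(size=4096)           # stand-in sensor samples (full quality)

for bits in (12, 8, 6, 4):
    step = 2.0 ** (1 - bits)             # quantization step of a coarser ADC
    approx = np.round(signal / step) * step
    mse = np.mean((approx - signal) ** 2)
    snr_db = 10 * np.log10(np.mean(signal ** 2) / mse)
    # Fewer bits sensed and transferred is the (simplified) energy saving.
    print(f"{bits:2d}-bit samples (~{bits/12:.0%} of the data): SNR {snr_db:5.1f} dB")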

Biography:

Prof. Younghyun Kim is an Assistant Professor in the Department of Electrical and Computer Engineering and an ECE Grainger Faculty Scholar at the University of Wisconsin-Madison, where he leads the Wisconsin Embedded Systems and Computing (WISEST) Laboratory (https://wisest.ece.wisc.edu/). Prof. Kim received his B.S. degree in computer science and engineering and his Ph.D. degree in electrical engineering and computer science from Seoul National University in 2007 and 2013, respectively. He was a Postdoctoral Research Assistant at Purdue University and a visiting scholar at the University of Southern California. His current research interests include energy-efficient computing and the security and privacy of the Internet of Things. Prof. Kim is a recipient of several awards, including the NSF Faculty Early Career Development Program (CAREER) Award, a Facebook Research Award, an IEEE Micro Top Pick, the EDAA Outstanding Dissertation Award, and Design Contest Awards at the ACM/IEEE International Symposium on Low Power Electronics and Design (ISLPED). He has served on the Technical Program Committees of various conferences on design automation and embedded systems, including the Design Automation Conference (DAC), ISLPED, the Asia and South Pacific Design Automation Conference (ASP-DAC), the International Conference on VLSI Design (VLSID), and the Symposium on Applied Computing (SAC). He served as a Guest Editor for a Special Issue of the VLSI Integration journal (Elsevier).

Hosted By: Prof. Nikil Dutt

“Bridging the Gap between Algorithm and Architecture”

Speaker: Biresh Kumar Joardar

Date/Time: Thursday, September 23, 2021

Location: Zoom

Abstract: 

Advanced computing systems have long been enablers of breakthroughs in Machine Learning (ML). However, as ML algorithms become more complex and datasets grow larger, existing computing platforms are no longer sufficient to bridge the gap between algorithmic innovation and hardware design. For example, DNN training can be accelerated on GPUs, but GPUs are bandwidth-bottlenecked, which can lead to sub-optimal performance. New designs such as processing-in-memory, where computation is placed close to the memory, can address these limitations. However, designing these new architectures often involves optimizing multiple conflicting objectives (e.g., performance, power, thermal, reliability, etc.). Design problems are further exacerbated by the availability of different core architectures (CPUs, GPUs, NVMs, FPGAs, etc.) and interconnection technologies (e.g., TSV-based stacking, M3D, photonics, wireless, etc.), each with a set of unique design requirements that need to be satisfied. The resulting diversity in the choice of hardware has made the design, evaluation, and testing of new architectures an increasingly challenging problem.
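
The multi-objective character of these design problems can be illustrated with a generic Pareto-front filter (a sketch under the assumption that all objectives are minimized; the candidate designs are random stand-ins, not the speaker's method):

import numpy as np

def pareto_front(points):
    """Keep designs not dominated on every objective (all minimized)."""
    front = []
    for i, p in enumerate(points):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            front.append(p)
    return np.array(front)

rng = np.random.default_rng(5)
# Columns: latency (ms), power (W), peak temperature (C) of candidate designs.
designs = rng.uniform([1, 1, 40], [10, 20, 100], size=(50, 3))
print(f"{len(pareto_front(designs))} Pareto-optimal designs out of {len(designs)}")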

In this presentation, we will discuss how machine learning techniques can be used to solve complex hardware design problems (and vice versa). More specifically, we will highlight the symbiotic relationship between hardware design and machine learning. We will demonstrate how machine learning techniques can be used to advance hardware designs spanning edge devices to the cloud, which will in turn empower further advances in machine learning (i.e., machine learning for machine learning).

Biography: 

Biresh Kumar Joardar is currently an NSF-sponsored Computing Innovation (postdoctoral) Fellow in the Department of Electrical and Computer Engineering at Duke University. He obtained his PhD from Washington State University in 2020. His PhD research focused on using machine learning algorithms to design and optimize heterogeneous manycore systems. As a CI Fellow, Biresh is currently working on developing reliable and energy-efficient architectures for machine learning applications. He received the Outstanding Graduate Student Researcher Award at Washington State University in 2019. Biresh has published in numerous prestigious conferences (including ESWEEK, DATE, ICCAD) and journals (TC, TCAD and TECS). His work has been nominated for Best Paper Awards at DATE 2019 and DATE 2020, and he won the Best Paper Award at NOCS 2019. His current research interests include machine learning, manycore architectures, accelerators for deep learning, hardware reliability and security.

Host: Prof. Mohammad Al Faruque

“Intermittent Learning on Harvested Energy”

Speaker: Shahriar Nirjon

Date/Time: Thursday, September 2, 2021, 9:00 a.m. – 10:00 a.m.

Location: Zoom

Abstract:

Years of technological advancements have made it possible for today’s small, portable electronic devices to last for years on battery power, and to last forever when powered by harvesting energy from their surrounding environment. Unfortunately, the prolonged life of these ultra-low-power systems poses a fundamentally new problem. While the devices last for years, the programs that run on them become obsolete when the nature of the sensory input or the operating conditions change, and the continued execution of such an obsolete program can be catastrophic. For example, if a cardiac pacemaker fails to recognize an impending cardiac arrest because the patient has aged or their physiology has changed, the device will cause more harm than good. Hence, being able to react, adapt, and evolve is necessary for these systems to guarantee their accuracy and response time. We aim to devise algorithms, tools, systems, and applications that enable ultra-low-power, sensor-enabled computing devices to execute complex machine learning algorithms while being powered solely by harvested energy. Unlike common practice, where a fixed classifier runs on a device, we take a fundamentally different approach in which the classifier is constructed so that it can adapt and evolve as the sensory input to the system, or the application-specific requirements, such as the time, energy, and memory constraints of the system, change during the extended lifetime of the system.
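
A hypothetical sketch of the flavor of such an adaptive classifier (mine, not the speaker's system): a nearest-centroid model whose per-sample update is cheap and whose entire state can be checkpointed, so learning can survive the power interruptions of an energy-harvesting device.

import numpy as np

class IntermittentCentroid:
    """Nearest-centroid classifier with O(features) incremental updates."""
    def __init__(self, n_classes, n_features):
        self.mu = np.zeros((n_classes, n_features))  # per-class centroids
        self.n = np.zeros(n_classes)                 # per-class sample counts

    def predict(self, x):
        return int(np.argmin(np.linalg.norm(self.mu - x, axis=1)))

    def update(self, x, label):
        # Incremental mean: all learned state lives in (mu, n), so the model
        # tolerates a power loss between any two samples.
        self.n[label] += 1
        self.mu[label] += (x - self.mu[label]) / self.n[label]

    def checkpoint(self):
        # On a real device this would be written to non-volatile memory.
        return self.mu.copy(), self.n.copy()

rng = np.random.default_rng(2)
model = IntermittentCentroid(n_classes=2, n_features=4)
for _ in range(200):                    # one power cycle's worth of samples
    label = int(rng.integers(2))
    model.update(rng.normal(loc=label, size=4), label)
state = model.checkpoint()              # survives the next outage
print("predicted class for all-ones input:", model.predict(np.ones(4)))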

Biography:

Dr. Shahriar Nirjon is an Assistant Professor of Computer Science at the University of North Carolina at Chapel Hill, NC. He is interested in Embedded Intelligence, the general idea of which is to make resource-constrained real-time and embedded sensing systems capable of learning, adapting, and evolving. Dr. Nirjon builds practical cyber-physical systems that involve embedded sensors and mobile devices, mobility and connectivity, and mobile data analytics. His work has applications in the areas of remote health and wellness monitoring and mobile health. Dr. Nirjon received his Ph.D. from the University of Virginia, Charlottesville, and has won a number of awards, including four Best Paper Awards: at the International Conference on Mobile Systems, Applications, and Services (MobiSys 2014), the Real-Time and Embedded Technology and Applications Symposium (RTAS 2012), Distributed Computing in Sensor Systems (DCOSS 2019), and Challenges in AI and Machine Learning for IoT (AIChallengeIoT 2020). He received the NSF CAREER Award in 2021. Prior to UNC, Dr. Nirjon worked as a Research Scientist in the Networking and Mobility Lab at Hewlett-Packard Labs in Palo Alto, CA.

Host: Prof. Mohammad Al Faruque

“Answering Multi-Dimensional Analytical Queries under Local Differential Privacy”

Name: Tianhao Wang

Date and Time: Thursday, February 27, 2020

Location: Engineering Hall 2430

Abstract:

When collecting information, local differential privacy (LDP) relieves users’ privacy concerns, as noise is added to each user’s private information before it leaves the user. The LDP technique has been deployed by Google, Apple, and Microsoft for data collection and monitoring. In this talk, I will share the key algorithms we developed at Alibaba, a Chinese e-commerce company. We study the problem of answering multi-dimensional queries under LDP, and propose several algorithms to handle queries with different types of predicates and aggregation functions. We built a prototype that enables different departments to collect, share, and analyze data within the company.
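
The simplest LDP primitive, randomized response, shows the core idea of perturbing data before it leaves the user; the multi-dimensional algorithms in the talk build on perturbation mechanisms of this kind (this sketch is generic background, not the talk's algorithms):

import math, random

def randomized_response(bit, epsilon):
    """Report the true bit with probability e^eps / (e^eps + 1)."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if random.random() < p else not bit

def estimate_fraction(reports, epsilon):
    """Debias the noisy reports into an unbiased frequency estimate."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    return (observed + p - 1) / (2 * p - 1)

random.seed(0)
truth = [i < 300 for i in range(1000)]                  # 30% of users hold "1"
reports = [randomized_response(b, 1.0) for b in truth]  # what users send
print(f"estimated fraction of 1s: {estimate_fraction(reports, 1.0):.3f}")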

Biography:

Tianhao Wang is a Ph.D. candidate in the Department of Computer Science at Purdue University, advised by Prof. Ninghui Li. He received his B.Eng. degree from the Software School of Fudan University in 2015. His research interests include differential privacy and local differential privacy. He is a recipient of the Bilsland Dissertation Fellowship and the Emil Stefanov Memorial Fellowship. He was a member of DPSyn, which won the second-place award in the NIST PSCR Differential Privacy Synthetic Data Challenge.

“Energy-Aware Data Center Management: Monitoring Trends and Insights via Machine Learning”

Name: Hayk Shoukourian

Date and Time: Friday, November 15, 2019

Location: Donald Bren Hall 3011

Abstract:

The increasing demand for online resources has placed an immense energy burden on contemporary High Performance Computing (HPC) and cloud data centers. Even though each new generation of HPC systems delivers higher power efficiency, the growth in system density and overall performance has continuously increased energy consumption. This energy consumption not only translates into high operational bills and affects the environment, but also influences the stability of the underlying power grid. In fact, these concerns have already led some governmental organizations to reconsider data center deployment procedures, with increased demands for renewable energy utilization and waste heat recovery.

The talk will give an overview of the Leibniz Supercomputing Centre (LRZ), introduce its flagship HPC systems, and discuss its high-temperature direct liquid cooling solution and waste heat reuse. These will be followed by recent R&D results that rely on machine learning (ML) technologies for forecasting various energy- and power-related Key Performance Indicators (KPIs) at the level of a data center’s building infrastructure. The talk will highlight applications of the developed models, outlining their use in the proactive management of modern data centers to tackle the above-mentioned challenges.
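
As generic background (not LRZ's actual model), KPI forecasting of this kind can be prototyped with a plain autoregressive regression over the KPI's own history; the synthetic "power" trace below is an assumption for illustration:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
t = np.arange(2000)
# Assumed KPI trace: daily-like periodicity plus noise (arbitrary units).
kpi = 100 + 20 * np.sin(2 * np.pi * t / 96) + rng.normal(scale=2, size=t.size)

lags = 8                                            # history window length
X = np.column_stack([kpi[i:i - lags] for i in range(lags)])
y = kpi[lags:]                                      # next value to forecast

model = LinearRegression().fit(X[:-200], y[:-200])  # train on older history
print(f"held-out R^2: {model.score(X[-200:], y[-200:]):.3f}")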

Biography:

Dr. Hayk Shoukourian received his M.Sc. and Ph.D. in Computer Science from the Technical University of Munich in 2012 and 2015, respectively. He joined the Leibniz Supercomputing Centre (LRZ) in 2012, and his R&D activities mainly involve efficient energy/power consumption management for HPC data centers. In his current role, Dr. Shoukourian is responsible for adaptive modelling of the interoperability between the target HPC systems and the building infrastructure of the supercomputing site. He is also a team leader for the PRACE (Partnership for Advanced Computing in Europe) work package on “HPC Commissioning and Prototyping”. Since August 2018, Dr. Shoukourian has been a lecturer in Computer Science at Ludwig-Maximilians-Universität München (LMU).

“Scalable Set-based Analysis for Verification of Cyber-Physical Systems”

Name: Stanley Bak

Date and Time: Tuesday, July 9, 2019

Location: Engineering Hall 2430

Abstract:

Cyber-physical systems combine complex physics with complex software. Although these systems offer significant potential in fields such as smart grid design, autonomous robotics and medical systems, verification of CPS designs remains challenging. Model-based design permits simulations to be used to explore potential system behaviors, but individual simulations do not provide full coverage of what the system can do. In particular, simulations cannot guarantee the absence of unsafe behaviors, which is unsettling as many CPS are safety-critical systems.

The goal of set-based analysis methods is to explore a system’s behaviors using sets of states rather than individual states. The usual downside of this approach is that set-based analysis methods are limited in scalability, working only for very small models. This talk describes our recent progress on improving the scalability of set-based reachability computation for hybrid automaton models with linear time-invariant (LTI) dynamics, some of which can apply to very large systems (up to one billion continuous state variables!). Lastly, we’ll discuss the significant overlap between the techniques used in our scalable reachability analysis methods and set-based input/output analysis of neural networks.
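
The core of set-based reachability for a discrete-time LTI system fits in a few lines (a simplified illustration, not the talk's algorithms, which handle continuous time and far larger systems): a zonotope, represented by a center and generator vectors, is propagated exactly through the dynamics, so one run covers a whole set of initial states.

import numpy as np

A = np.array([[0.98, 0.10],
              [-0.10, 0.98]])        # assumed stable LTI dynamics x' = A x
center = np.array([5.0, 0.0])        # initial set: a box around (5, 0)
generators = np.array([[0.5, 0.0],
                       [0.0, 0.5]])  # columns span the box

for step in range(50):
    center = A @ center              # the image of a zonotope under A is
    generators = A @ generators      # again a zonotope: map center, generators
    # Max of x1 over the set = center_1 + sum |first row of the generators|.
    x1_max = center[0] + np.abs(generators[0, :]).sum()
    if x1_max >= 7.0:                # assumed unsafe region: x1 >= 7
        print(f"step {step}: cannot rule out reaching x1 >= 7")
        break
else:
    print("proved: every initial state stays below x1 = 7 for 50 steps")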

Biography:

Stanley Bak is a research computer scientist investigating the formal verification of cyber-physical systems. He strives to create scalable and automatic formal analysis methods for complex models with both ordinary differential equations and discrete behaviors. The ultimate goal is to make formal approaches applicable, which demands developing new theory, programming efficient tools and building experimental systems.

Stanley Bak received a Bachelor’s degree in Computer Science from Rensselaer Polytechnic Institute (RPI) in 2007 (summa cum laude), a Master’s degree in Computer Science from the University of Illinois at Urbana-Champaign (UIUC) in 2009, and a PhD from UIUC in 2013. He received the Founders Award of Excellence for his undergraduate research at RPI in 2004, the Debra and Ira Cohen Graduate Fellowship from UIUC twice, in 2008 and 2009, and was awarded the Science, Mathematics and Research for Transformation (SMART) Scholarship from 2009 to 2013. Stanley worked as a research computer scientist for the Air Force Research Laboratory (AFRL) from 2013 to 2018, both in the Information Directorate and the Aerospace Systems Directorate. Currently, he helps run Safe Sky Analytics, a small research consulting company working with the FAA and the Air Force.

“Safety Verification and Training for Learning-enabled Cyber-Physical Systems”

Name: Jyotirmoy Vinay Deshmukh

Date and Time: Tuesday, May 14, 2019 at 2:00 p.m.

Location: Donald Bren Hall 4011

Abstract:

With the increasing popularity of deep learning, there have been several efforts to use neural-network-based controllers in cyber-physical system applications. However, neural networks are equally well known for their lack of interpretability, explainability and verifiability. This is especially an issue for safety-critical cyber-physical systems such as unmanned aerial vehicles or autonomous ground vehicles. How can we verify that a neural-network-based controller will always keep the system safe? We look at a new verification approach based on automatically synthesizing a barrier certificate for the system, proving that, starting from a given set of initial conditions, the system can never reach an unsafe state. Barrier certificates are essentially a generalization of inductive invariants to continuous dynamical systems, and we will show how nonlinear SMT solvers can be used to establish the barrier certificate conditions. A more intriguing challenge is whether we can actually train neural networks to obey safety constraints. We will look at a new way of reward shaping in reinforcement learning that could help achieve this goal.
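
For a toy one-dimensional system dx/dt = -x, the barrier-certificate conditions can be checked with the z3 solver as follows (a hedged sketch: the talk targets richer dynamics and nonlinear SMT; the system, sets, and candidate barrier here are invented):

from z3 import Real, Solver, And, Not, Implies, unsat

x = Real('x')
B = x - 2.5        # candidate barrier certificate (invented for this sketch)
dBdt = -x          # Lie derivative: dB/dx * f(x) with f(x) = -x

def holds_for_all(claim):
    """A claim is valid iff its negation is unsatisfiable."""
    s = Solver()
    s.add(Not(claim))
    return s.check() == unsat

init_ok = holds_for_all(Implies(And(x >= 1, x <= 2), B <= 0))  # init inside
unsafe_ok = holds_for_all(Implies(x >= 3, B > 0))              # unsafe outside
flow_ok = holds_for_all(Implies(B == 0, dBdt <= 0))            # no crossing
print("barrier certificate valid:", init_ok and unsafe_ok and flow_ok)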

Biography:

Jyotirmoy V. Deshmukh (Jyo) is an assistant professor in the Department of Computer Science in the Viterbi School of Engineering at the University of Southern California in Los Angeles, USA. Before joining USC, Jyo worked as a Principal Research Engineer at Toyota Motors North America R&D. He received his Ph.D. degree from the University of Texas at Austin and was a post-doctoral fellow at the University of Pennsylvania. Jyo’s research interests are in the broad area of formal methods. Currently, Jyo is interested in using logic-based methods for machine learning, and in techniques for the analysis, design, verification and synthesis of cyber-physical systems, especially those that use AI-based perception, control and planning algorithms.

“Poly-Logarithmic Side Channel Rank Estimation via Exponential Sampling”

Name: Liron David

Date and Time: Tuesday, March 12, 2019 at 10:00 a.m.

Location: Engineering Hall 5204

Abstract:

Rank estimation is an important tool for side-channel evaluation laboratories. It allows estimating the remaining security after an attack has been performed, quantified as the time complexity and memory consumption required to brute-force the key, given the leakages as probability distributions over d subkeys (usually key bytes). These estimations are particularly useful when the key is beyond the reach of exhaustive search.

We propose ESrank, the first rank estimation algorithm that enjoys provable poly-logarithmic time and space complexity and also achieves excellent practical performance. Our main idea is to use exponential sampling to drastically reduce the algorithm’s complexity. Importantly, ESrank is simple to build from scratch and requires no algorithmic tools beyond a sorting function. After rigorously bounding the accuracy, time and space complexities, we evaluated the performance of ESrank on a real side-channel analysis (SCA) data corpus and compared it to the currently best histogram-based algorithm. We show that ESrank gives excellent rank estimation (with roughly a 1-bit margin between lower and upper bounds), with performance on par with the histogram algorithm: a run-time of under 1 second on a standard laptop using 6.5 MB of RAM.
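
For intuition about the problem being solved, here is a compact sketch of histogram-style rank estimation (illustrative only, not ESrank; all data is synthetic): bin each subkey's log-probabilities, convolve the histograms across subkeys, and count the full keys that score above the correct key.

import numpy as np

rng = np.random.default_rng(4)
d, n, nbins = 4, 256, 256           # 4 subkeys (key bytes), 256 candidates each

probs = rng.dirichlet(np.ones(n), size=d)   # stand-in attack output
logp = np.log2(probs)

w = (logp.max() - logp.min()) / nbins       # shared bin width
base = logp.min()                           # shared per-subkey origin

def to_hist(row):
    idx = np.minimum(((row - base) / w).astype(int), nbins - 1)
    return np.bincount(idx, minlength=nbins).astype(float)

hist, origin = to_hist(logp[0]), base
for i in range(1, d):
    hist = np.convolve(hist, to_hist(logp[i]))  # histogram of summed scores
    origin += base                              # bin j now means origin + j*w

true_key = [0, 1, 2, 3]                         # arbitrary "correct" subkeys
true_score = sum(logp[i, k] for i, k in enumerate(true_key))
rank_est = hist[int((true_score - origin) / w) + 1:].sum()  # higher-scoring keys
print(f"estimated rank: about 2^{np.log2(max(rank_est, 1.0)):.1f} of 2^{8 * d}")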

Biography:

Liron David is a Ph.D. candidate in Electrical Engineering at Tel-Aviv University under the supervision of Prof. Avishai Wool. She received her B.Sc. degree in Computer Science and Electrical and Electronics Engineering and her M.Sc. degree in Electrical Engineering, both from Tel-Aviv University. Liron won the Weinstein award for excellence in studies in 2017, the Weinstein best paper prize in 2018, and the Tel-Aviv University excellence in teaching award in 2018.

“Partial Logic Synthesis and Its Application to Automatic Generation of Parallel/Distributed Algorithms”

Name: Masahiro Fujita

Date and Time: Thursday, January 14 at 3:00 p.m.

Location: Engineering Hall 3206

Abstract: In this talk, we first review the partial logic synthesis problem and an associated algorithm based on a QBF (Quantified Boolean Formula) formulation. Partial logic synthesis generates appropriate sub-circuits for the missing portions of a target design so that the entire circuit becomes equivalent to a separately given specification. We then show that the problem of automatically generating distributed/parallel computations for a given specification can be defined as a partial logic synthesis problem. Taking matrix-vector product computation as an example, we show how theoretically optimal distributed/parallel computations can be automatically generated targeting many cores/chips connected through a ring.
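
The QBF formulation can be illustrated with a tiny exists-forall query in z3 (my sketch, not the speaker's tool): find a configuration for a missing two-input sub-circuit, modeled as a 4-entry LUT, such that for all inputs the completed circuit equals the specification.

from z3 import Bools, Bool, ForAll, Solver, If, And, Xor, Not, sat

a, b = Bools('a b')
c = [Bool(f'c{i}') for i in range(4)]     # LUT truth table: the "exists" part

def lut(a, b):
    # Select the truth-table entry matching the input combination.
    return If(And(Not(a), Not(b)), c[0],
           If(And(Not(a), b), c[1],
           If(And(a, Not(b)), c[2], c[3])))

spec = Not(Xor(a, b))        # the separately given specification: XNOR
circuit = Not(lut(a, b))     # known NOT stage fed by the missing sub-circuit

s = Solver()
s.add(ForAll([a, b], circuit == spec))    # exists c . forall a, b . equal
if s.check() == sat:
    m = s.model()
    print("synthesized LUT:", [m.evaluate(ci) for ci in c])  # XOR expected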

Biography: Masahiro Fujita received his Ph.D. in Information Engineering from the University of Tokyo in 1985 for his work on model checking of hardware designs using logic programming languages. In 1985, he joined Fujitsu as a researcher and started to work on automatic hardware synthesis as well as formal verification methods and tools, including enhancements of BDD/SAT-based techniques. Since March 2000, he has been a professor at the VLSI Design and Education Center of the University of Tokyo. He has been involved in a Japanese governmental research project on dependable system design and has developed a formal verifier for C programs that can be used for both hardware and embedded software designs. He has authored and co-authored 10 books and has more than 200 publications.

“Accelerated Computing for Edge-Centric IoT”

Name: Tulika Mitra

Date and Time: Monday, November 5, 2018, 2:00 p.m. – 3:00 p.m.

Location: Donald Bren Hall 3011

Abstract: The Internet of Things (IoT), an ever-growing network of billions of devices embedded within physical objects, is revolutionizing our daily life. The IoT devices at the edge are primarily responsible only for collecting data and communicating it to the cloud, where the computationally intensive data analytics takes place. However, data privacy and connectivity issues, in conjunction with the fast real-time response requirements of certain IoT applications, call for smart edge devices that can support privacy-preserving, time-sensitive computation for machine intelligence on-site. In this talk, I will present the computation challenges in edge-centric IoT and introduce hardware-software co-designed approaches to overcome these challenges. I will discuss the design of configurable, customizable accelerators that are completely software-programmable and can be universally deployed across diverse domains to speed up computation and realize the edge analytics vision at an ultra-low power budget. I will also demonstrate the potential to achieve low-power, real-time intelligence on IoT edge devices via collaborative computing that engages all the on-chip heterogeneous compute elements (CPU, GPU, reconfigurable computing, and accelerators) in a synergistic fashion through sophisticated compile-time and runtime strategies.

Biography: Tulika Mitra is a Professor of Computer Science at the School of Computing, National University of Singapore (NUS). Her research interests span various aspects of the design automation of embedded real-time systems, with particular emphasis on low-power computing, heterogeneous computing, application-specific processors, and software timing analysis/optimizations. She has authored over one hundred and fifty scientific publications in leading international journals and conferences and holds multiple US patents. Her research has been recognized by best paper awards and nominations at leading conferences. She is the recipient of the Indian Institute of Science Outstanding Woman Researcher Award and is an IEEE Distinguished Visitor. Prof. Mitra currently serves as Senior Associate Editor of the ACM Transactions on Embedded Computing Systems, Deputy Editor-in-Chief of IEEE Embedded Systems Letters, and Associate Editor of IEEE Design & Test magazine. She has served as Associate Editor of IEEE TCAD and as an organizing/program committee member of almost all major conferences in embedded systems, real-time systems, and electronic design automation, including as program chair of EMSOFT and CASES.

“Plasmonic Wave Computing: Concepts and Potential”

Name: Francky Catthoor

Date and Time: Wednesday, October 31, 2018, 11:00 a.m. – 12:00 p.m.

Location: Engineering Hall 2430

Abstract: 

Several beyond-CMOS computing directions are being explored, given that the CMOS scaling roadmap faces increasing challenges. Any alternative, however, faces strong competition from the extremely optimized CMOS roadmap. One potential direction that can complement CMOS in an area where it is weaker is highly parallel “wave-type” computing. In this talk I will discuss some of the promising concepts and their potential, but also the challenges.

Biography:

Francky Catthoor received a Ph.D. in EE from the Katholieke Universiteit Leuven, Belgium in 1987. Between 1987 and 2000, he headed several research domains in the area of synthesis techniques and architectural methodologies. Since 2000 he has been strongly involved in other activities at IMEC, including deep submicron technology aspects, IoT and biomedical platforms, and smart photovoltaic modules, all at IMEC Leuven, Belgium. Currently he is an IMEC fellow. He is also a part-time full professor in the EE department of KU Leuven. He has been an associate editor for several IEEE and ACM journals and was elected IEEE fellow in 2005.

“Security of Additive Manufacturing: New Frontiers”

Name: Mark Yampolskiy

Date and Time: Friday, October 19, 2018, 10:00 a.m. – 11:00 a.m.

Location: Engineering Hall 2430

Abstract: Additive Manufacturing (AM), a.k.a. 3D printing, is a rapidly growing multibillion-dollar industry that is increasingly used to manufacture functional parts, including components of safety-critical systems in the aerospace, automotive, and other industries. However, reliance on IT infrastructure and the high degree of computerization of the manufacturing machines make AM susceptible to a variety of cyber and cyber-physical attacks.

AM Security is a fairly new and highly interdisciplinary field of research that aims to address the novel threats emerging for this manufacturing technology. This talk will first provide an introduction to the field. Focusing on AM sabotage, one of the identified threat categories, Dr. Mark Yampolskiy will then introduce emerging frontiers: sabotage of composite-material parts, AM forensics, and detection of sabotage attacks via side-channel measurements. The talk will conclude with a summary of identified research gaps in the current state of the art.

Biography:

Mark Yampolskiy received his Ph.D. in Computer Science from the Ludwig-Maximilians University of Munich, Germany in 2009. He currently holds an Assistant Professor position at the University of South Alabama. Since his post-doctoral appointment at Vanderbilt University (2012-2013), Mark Yampolskiy has been performing research on the security of cyber-physical systems (CPS). He was among the researchers who pioneered the security of Additive Manufacturing (AM, a.k.a. 3D printing) around 2014. AM Security remains his major research focus, and he is currently one of the leading experts in this field. His work is predominantly associated with two threat categories: sabotage of 3D-printed functional parts and theft of intellectual property. He has numerous seminal publications in the field, ranging from attacks on/with AM to novel approaches for the detection of such attacks.

AM Security is a highly interdisciplinary field of research. To address this challenge, Mark Yampolskiy actively collaborates with experts from different disciplines. His major collaboration partners are affiliated with Lawrence Livermore National Laboratory (LLNL), Ben-Gurion University of the Negev (BGU) in Israel, the Singapore University of Technology and Design (SUTD), Auburn University (AU), and the University of Tennessee at Chattanooga (UTC).

“SAT-based Design Debugging and Its Application to Undergraduate Circuit Experiment”

Name: Takeshi Matsumoto

Date and Time: Friday, September 7, 2018, 11:00 a.m. – 12:00 p.m.

Location: Donald Bren Hall 3011

Abstract:

As VLSI designs become larger and more complicated, designers spend an increasing amount of time on verification and debugging to detect bugs and remove them from their designs. Moreover, some bugs escape pre-fabrication verification and are only recognized when an actual chip is run after fabrication. In post-silicon debugging, it is not practical to change a large part of the circuit to fix bugs, since such a change would require designers to repeat time-consuming physical and timing design processes. Usually, more than half of the verification time is spent correcting the buggy portions of a design rather than identifying them, since debugging is much less automated than checking the correctness of designs. Thus, automating and shortening the debugging process is now one of the most important issues in VLSI design. In this talk, SAT-based design debugging methods at the gate level and behavior level are introduced. In these methods, debugging consists of two processes: locating the suspicious portions of the design, and correcting them through replacement with appropriate sets of gates. In the locating process, designers try to find the locations (or candidate locations) that are the root cause of the bugs; they then modify the logic functions at those possibly buggy locations in the correcting process. Both processes can be solved by repeatedly solving Boolean satisfiability (SAT) problems, after introducing programmable logic elements, such as MUXes (multiplexers) and/or LUTs (look-up tables), into the original design under debugging. This talk gives the details of the theoretical aspects of these methods and experimental results on several circuits. The last part of the talk introduces a trial activity applying the debugging methods to an undergraduate experiment in which a simple 4-bit CPU is built on a breadboard over six 90-minute classes. This activity can be seen as an example of applying state-of-the-art research results in the education field.
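
The correction step can be illustrated with a small z3 sketch (my example, not the speaker's tool; the circuit and reference are invented): the suspect gate is replaced by a programmable LUT, and a single SAT query over the test vectors returns a truth table that makes the circuit match the specification.

from z3 import Bool, BoolVal, And, Not, Or, If, Solver, sat

c = [Bool(f'c{i}') for i in range(4)]       # LUT replacing the suspect gate

def lut(a, b):
    return If(a, If(b, c[3], c[2]), If(b, c[1], c[0]))

def circuit(a, b, d):
    return Or(lut(a, b), d)                 # design with the gate made programmable

def spec(a, b, d):
    return Or(And(a, Not(b)), d)            # golden reference behavior

s = Solver()
for va in (False, True):                    # constrain on every test vector
    for vb in (False, True):
        for vd in (False, True):
            a, b, d = BoolVal(va), BoolVal(vb), BoolVal(vd)
            s.add(circuit(a, b, d) == spec(a, b, d))

if s.check() == sat:
    m = s.model()
    print("corrected gate truth table:", [m.evaluate(ci) for ci in c])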

Biography:

Takeshi Matsumoto is an Associate Professor in the Department of Electronics and Information Engineering at the National Institute of Technology, Ishikawa College, Japan, where he directs the Integrated System Lab. He received his M.S. and Ph.D. degrees in Electronic Engineering from the University of Tokyo, Japan, in 2005 and 2008, respectively; the topic of his Ph.D. thesis was equivalence checking of system-level designs. From 2008 to 2013, he was a Research Associate at the VLSI Design and Education Center, the University of Tokyo. His research interests include formal verification of system-level designs, automated debugging and debugging support for pre- and post-silicon circuits, and education and teaching materials on electric and electronic circuits. He received the IPSJ Yamashita SIG Research Award in 2012 from the Information Processing Society of Japan.