Seminars at CECS

“Safety Verification and Training for Learning-enabled Cyber-Physical Systems”

Name: Jyotirmoy Vinay Deshmukh

Date and Time: Tuesday, May 14, 2019 at 2:00 p.m.

Location: Donald Bren Hall 4011

Abstract:

With the increasing popularity of deep learning, there have been several efforts to use neural-network-based controllers in cyber-physical system applications. However, neural networks are equally well known for their lack of interpretability, explainability and verifiability. This is especially an issue for safety-critical cyber-physical systems such as unmanned aerial vehicles or autonomous ground vehicles. How can we verify that a neural-network-based controller will always keep the system safe? We look at a new verification approach based on automatically synthesizing a barrier certificate for the system, to prove that, starting from a given set of initial conditions, the system can never reach an unsafe state. Barrier certificates are essentially a generalization of inductive invariants to continuous dynamical systems, and we will show how nonlinear SMT solvers can be used to establish the barrier certificate conditions. A more intriguing challenge is whether we can actually train neural networks to obey safety constraints. We will look at a new way of reward shaping in reinforcement learning that could help achieve this goal.
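As background (notation ours, not necessarily the speaker's formulation): for a system $\dot{x} = f(x)$ with initial set $X_0$ and unsafe set $X_u$, a barrier certificate is a function $B$ satisfying conditions of the form

```latex
\begin{align}
B(x) &\le 0 && \forall x \in X_0 \\
B(x) &> 0  && \forall x \in X_u \\
\frac{\partial B}{\partial x}\, f(x) &\le 0 && \forall x \text{ with } B(x) = 0
\end{align}
```

Since $B$ starts non-positive, is positive on the unsafe set, and cannot increase through its zero level set, no trajectory from $X_0$ can reach $X_u$; each universally quantified condition can then be checked (or refuted) by a nonlinear SMT solver.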

Biography:

Jyotirmoy V. Deshmukh (Jyo) is an assistant professor in the Department of Computer Science in the Viterbi School of Engineering at the University of Southern California in Los Angeles, USA. Before joining USC, Jyo worked as a Principal Research Engineer in Toyota Motors North America R&D. He got his Ph.D. degree from the University of Texas at Austin and was a post-doctoral fellow at the University of Pennsylvania. Jyo’s research interest is in the broad area of formal methods. Currently, Jyo is interested in using logic-based methods for machine learning, and in techniques for the analysis, design, verification and synthesis of cyber-physical systems, especially those that use AI-based perception, control and planning algorithms.

 
“Poly-Logarithmic Side Channel Rank Estimation via Exponential Sampling”

Name: Liron David

Date and Time: Tuesday, March 12 at 10:00 a.m.

Location: Engineering Hall 5204

Abstract:

Rank estimation is an important tool for side-channel evaluation laboratories. It allows estimating the security remaining after an attack has been performed, quantified as the time complexity and memory consumption required to brute-force the key, given the leakages as probability distributions over d subkeys (usually key bytes). These estimates are particularly useful in cases where the key is not reachable by exhaustive search.

We propose ESrank, the first rank estimation algorithm that enjoys provable poly-logarithmic time and space complexity while also achieving excellent practical performance. Our main idea is to use exponential sampling to drastically reduce the algorithm’s complexity. Importantly, ESrank is simple to build from scratch, and requires no algorithmic tools beyond a sorting function. After rigorously bounding the accuracy, time and space complexities, we evaluated the performance of ESrank on a real SCA data corpus, and compared it to the currently best histogram-based algorithm. We show that ESrank gives excellent rank estimation (with roughly a 1-bit margin between lower and upper bounds), with performance on par with the histogram algorithm: a run-time of under 1 second on a standard laptop using 6.5 MB of RAM.
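The exponential-sampling idea can be illustrated with a small pure-Python sketch. This is our toy reconstruction from the abstract, not the paper's exact construction; the index pattern and helper names are ours:

```python
import math

def exp_sample(sorted_probs, base=2):
    """Keep only exponentially spaced indices (0, 1, 2, 4, 8, ...) of a
    descending-sorted probability list, recording each kept index."""
    samples = []
    i = 0
    while i < len(sorted_probs):
        samples.append((i, sorted_probs[i]))
        i = 1 if i == 0 else i * base
    return samples

def bounds_at(samples, j):
    """Sandwich the original value at index j between the nearest kept
    samples: the value at the last kept index <= j is an upper bound,
    the value at the first kept index >= j is a lower bound."""
    lookup = dict(samples)
    lo_idx = max(i for i, _ in samples if i <= j)
    hi_candidates = [i for i, _ in samples if i >= j]
    upper = lookup[lo_idx]                     # list is descending
    lower = lookup[hi_candidates[0]] if hi_candidates else 0.0
    return lower, upper

probs = sorted([0.3, 0.2, 0.15, 0.1, 0.08, 0.07, 0.05, 0.05], reverse=True)
s = exp_sample(probs)
print([i for i, _ in s])  # → [0, 1, 2, 4]  (O(log n) summary of n = 8 entries)
```

As we understand it from the abstract, each of the d per-subkey distributions is summarized this way; combining the summaries pairwise keeps the total work poly-logarithmic, while the sandwich property yields lower and upper bounds on the true rank.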

Biography:

Liron David is a Ph.D. candidate in Electrical Engineering at Tel-Aviv University under the supervision of Prof. Avishai Wool. She received her B.Sc. degree in Computer Science and Electrical and Electronics Engineering and her M.Sc. degree in Electrical Engineering, both from Tel-Aviv University. Liron won the Weinstein award for excellence in studies in 2017, the Weinstein best paper prize in 2018, and the Tel-Aviv University award for excellence in teaching in 2018.

 
“Partial Logic Synthesis and Its Application to Automatic Generation of Parallel/Distributed Algorithms”

Name: Masahiro Fujita

Date and Time: Thursday, January 14 at 3:00 p.m.

Location: Engineering Hall 3206

Abstract: In this talk we first review the partial logic synthesis problem and an associated algorithm based on a QBF (Quantified Boolean Formula) formulation. Partial logic synthesis generates appropriate sub-circuits for the missing portions of a target design so that the entire circuit becomes equivalent to a separately given specification. We then show that the problem of automatically generating distributed/parallel computation for a given specification can be cast as a partial logic synthesis problem. Taking matrix-vector product computation as an example, we show how theoretically optimal distributed/parallel computation can be automatically generated for many cores/chips connected through a ring.
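In our notation (not necessarily the speaker's), the QBF formulation asks whether there exists a configuration $c$ of look-up tables filling the missing portions such that the patched design matches the specification on every input:

```latex
\exists c \;\; \forall x : \quad \mathrm{Impl}_c(x) \equiv \mathrm{Spec}(x)
```

A QBF solver answering "yes" also returns a witness $c$, i.e., the truth tables of the synthesized sub-circuits.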

Biography: Masahiro Fujita received his Ph.D. in Information Engineering from the University of Tokyo in 1985 for his work on model checking of hardware designs using logic programming languages. In 1985, he joined Fujitsu as a researcher and started to work on automatic hardware synthesis as well as formal verification methods and tools, including enhancements of BDD/SAT-based techniques. Since March 2000, he has been a professor at the VLSI Design and Education Center of the University of Tokyo. He has been involved in a Japanese governmental research project on dependable system design and has developed a formal verifier for C programs that can be used for both hardware and embedded software designs. He has authored and co-authored 10 books and has more than 200 publications.

 
“Accelerated Computing for Edge-Centric IoT”

Name: Tulika Mitra

Date and Time: Monday, November 5 at 2:00 p.m. – 3:00 p.m.

Location: Donald Bren Hall 3011

Abstract: The Internet of Things (IoT), an ever-growing network of billions of devices embedded within physical objects, is revolutionizing our daily life. The IoT devices at the edge are primarily responsible only for collecting and communicating the data to the cloud, where the computationally intensive data analytics takes place. However, data privacy and connectivity issues—in conjunction with the fast real-time response requirement of certain IoT applications—call for smart edge devices that support privacy-preserving, time-sensitive computation for machine intelligence on-site. In this talk, I will present the computation challenges in edge-centric IoT and introduce hardware-software co-designed approaches to overcome these challenges. I will discuss the design of configurable, customizable accelerators that are completely software-programmable and can be universally deployed across diverse domains to speed up computation and realize the edge analytics vision at an ultra-low power budget. I will also demonstrate the potential to achieve low-power, real-time intelligence on IoT edge devices via collaborative computing that engages all the on-chip heterogeneous compute elements (CPU, GPU, reconfigurable computing, and accelerators) in a synergistic fashion through sophisticated compile-time and runtime strategies.

Biography: Tulika Mitra is a Professor of Computer Science at the School of Computing, National University of Singapore (NUS). Her research interests span various aspects of the design automation of embedded real-time systems, with particular emphasis on low-power computing, heterogeneous computing, application-specific processors, and software timing analysis/optimizations. She has authored over one hundred and fifty scientific publications in leading international journals and conferences and holds multiple US patents. Her research has been recognized by best paper awards and nominations at leading conferences. She is the recipient of the Indian Institute of Science Outstanding Woman Researcher Award and is an IEEE Distinguished Visitor. Prof. Mitra currently serves as Senior Associate Editor of the ACM Transactions on Embedded Computing Systems, Deputy Editor-in-Chief of IEEE Embedded Systems Letters, and Associate Editor of IEEE Design & Test Magazine. She has served as Associate Editor of IEEE TCAD and as an organizing/program committee member of almost all major conferences in embedded systems, real-time systems, and electronic design automation, including as program chair of EMSOFT and CASES.

 
“Plasmonic Wave Computing: Concepts and Potential”

Name: Francky Catthoor

Date and Time: Wednesday, October 31 at 11:00 a.m. – 12:00 p.m.

Location: Engineering Hall 2430

Abstract: 

Several beyond-CMOS computing directions are being explored, given that the CMOS scaling roadmap faces increasing challenges. Any alternative, however, faces strong competition from the extremely optimized CMOS roadmap. One potential direction that can complement CMOS in areas where it is weaker is highly parallel “wave-type” computing. In this talk I will discuss some of the promising concepts, their potential, and their challenges.

Biography:

Francky Catthoor received a Ph.D. in EE from the Katholieke Universiteit Leuven, Belgium in 1987. Between 1987 and 2000, he headed several research domains in the area of synthesis techniques and architectural methodologies. Since 2000 he has been strongly involved in other activities at IMEC, including deep-submicron technology aspects, IoT and biomedical platforms, and smart photovoltaic modules, all at IMEC Leuven, Belgium. Currently he is an IMEC fellow. He is also a part-time full professor in the EE department of KU Leuven. He has been an associate editor for several IEEE and ACM journals and was elected an IEEE fellow in 2005.

 
“Security of Additive Manufacturing: New Frontiers”

Name: Mark Yampolskiy

Date and Time: Friday, October 19 at 10:00 a.m. – 11:00 a.m.

Location: Engineering Hall 2430

Abstract: Additive Manufacturing (AM), a.k.a. 3D printing, is a rapidly growing multibillion-dollar industry that is increasingly used to manufacture functional parts, including components of safety critical systems in the aerospace, automotive, and other industries. However, reliance on the IT infrastructure and the high degree of computerization of the manufacturing machines make AM susceptible to a variety of cyber and cyber-physical attacks.

AM Security is a fairly new and highly inter-disciplinary field of research that aims to address the novel threats emerging for this manufacturing technology. This talk will first provide an introduction to the field. Focusing on AM sabotage, one of the identified threat categories, Dr. Mark Yampolskiy will then introduce emerging frontiers: sabotage of composite material parts, AM forensics, and detection of sabotage attacks via side-channel measurements. The talk will conclude with a summary of identified research gaps in the current state of the art.

Biography:

Mark Yampolskiy received his Ph.D. in Computer Science from Ludwig-Maximilians University of Munich, Germany in 2009. He currently holds an Assistant Professor position at the University of South Alabama. Since his post-doctoral appointment at Vanderbilt University (2012-2013), Mark Yampolskiy has been performing research on Security of Cyber-Physical Systems (CPS). He was among the researchers who pioneered Security of Additive Manufacturing (AM, a.k.a. 3D Printing) around 2014. AM Security remains his major research focus, and he is currently one of the leading experts in this field. His work is predominantly associated with two threat categories: sabotage of 3D-printed functional parts and theft of intellectual property. He has numerous seminal publications in the field, ranging from attacks on/with AM to novel approaches for the detection of such attacks.

AM Security is a highly interdisciplinary field of research. In order to address this challenge, Mark Yampolskiy actively collaborates with experts from different disciplines. His major collaboration partners are affiliated with Lawrence Livermore National Laboratory (LLNL), Ben-Gurion University of the Negev (BGU) in Israel, Singapore University of Technology and Design (SUTD), Auburn University (AU), and the University of Tennessee at Chattanooga (UTC).

 
“SAT-based Design Debugging and Its Application to Undergraduate Circuit Experiment”

Name: Takeshi Matsumoto

Date and Time: Friday, September 7 at 11:00 a.m. – 12:00 p.m.

Location: Donald Bren Hall 3011

Abstract:

As VLSI designs become larger and more complicated, designers spend more and more time on verification and debugging to detect and remove bugs. Moreover, some bugs may escape the pre-fabrication verification processes and are only recognized by running an actual chip after fabrication. In post-silicon debugging, it is not practical to change a large part of the circuit to fix bugs, since such a large change requires designers to redo time-consuming physical and timing design processes. Usually, more than half of the verification time is spent on correcting the buggy portions of the designs rather than identifying them, since debugging is much less automated than checking the correctness of the designs. Thus, automating and shortening the debugging process is now one of the most important issues in VLSI design. In this talk, SAT-based design debugging methods at the gate level and the behavior level are introduced. In these methods, debugging consists of two processes: locating the suspicious portions in the design and correcting them through replacement with appropriate sets of gates. In the locating process, designers try to find the locations (or candidate locations) that are the root cause of the bugs. Then, they modify the logic functions at those possibly buggy locations in the correcting process. Both processes can be solved by repeatedly solving Boolean satisfiability (SAT) problems after introducing programmable logic elements, such as MUXes (multiplexers) and/or LUTs (look-up tables), into the original design under debug. This talk gives the details of the theoretical aspects of these methods and experimental results on several circuits. In the last part of the talk, a trial activity applying the debugging methods to an undergraduate experiment, in which a simple 4-bit CPU is built on a breadboard over six 90-minute classes, is introduced. This activity can be seen as an example of applying state-of-the-art research results to education.
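The locate-and-correct idea can be illustrated with a toy sketch. The circuit, the bug, and the exhaustive enumeration (standing in for one SAT query per candidate LUT configuration) are all invented for this illustration:

```python
from itertools import product

# Toy specification: z = (a AND b) OR (a XOR b), i.e. a OR b
def spec(a, b):
    return (a & b) | (a ^ b)

# Buggy implementation: the final OR was mistyped as AND
def impl(a, b, g2=None):
    t1 = a & b
    t2 = a ^ b
    if g2 is None:
        return t1 & t2       # the buggy gate at the suspect location
    return g2(t1, t2)        # programmable 2-input LUT replacing the gate

def find_repair():
    """Enumerate all 16 truth tables of a 2-input LUT and return one that
    makes the implementation match the spec on every primary input."""
    for table in product([0, 1], repeat=4):
        lut = lambda x, y, t=table: t[(x << 1) | y]
        if all(impl(a, b, lut) == spec(a, b) for a, b in product([0, 1], repeat=2)):
            return table
    return None

print(find_repair())  # → (0, 1, 1, 0) — behaves as OR on all reachable (t1, t2) pairs
```

A SAT (or QBF) solver does the same search symbolically: the LUT configuration bits become free variables, and a single satisfiability query per candidate location decides whether that location can be corrected.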

Biography:

Takeshi Matsumoto is an Associate Professor in the Department of Electronics and Information Engineering at the National Institute of Technology, Ishikawa College, Japan, where he directs the Integrated System Lab. He received his M.S. and Ph.D. degrees in Electronic Engineering from the University of Tokyo, Japan, in 2005 and 2008, respectively. The topic of his Ph.D. thesis was equivalence checking of system-level designs. From 2008 to 2013, he was a Research Associate at the VLSI Design and Education Center, the University of Tokyo. His research interests include formal verification of system-level designs, automated debugging and debugging support for pre- and post-silicon circuits, and education and teaching materials on electric and electronic circuits. He received the IPSJ Yamashita SIG Research Award in 2012 from the Information Processing Society of Japan.

 
“Efficient Processing when Intelligence Moves to the Edge”

Name: Tinoosh Mohsenin

Date and Time: Friday, July 20 at 10:00 a.m. – 11:00 a.m.

Location: Donald Bren Hall 4011

Abstract: 

Continuous collection and processing of vast amounts of data is becoming more common with the advancement of wearable sensors, Internet of Things (IoT) devices and cyber-physical systems. Despite their potentially significant impact, intelligent mobile technologies face a number of challenges for use in daily life. These devices usually stream raw data to cloud computers for computation, which leads to massive storage, significant transmission power consumption, real-time constraints and privacy/security issues; thus processing at the edge, near the sensors, is becoming increasingly preferred. Processing these sensor-level data requires a variety of signal processing and machine learning tasks that come at the cost of high computational complexity and memory storage, which are overwhelming for these lightweight, battery-constrained platforms.

In this talk I will present research solutions that enable efficient processing of machine learning tasks to improve energy efficiency and throughput without sacrificing application accuracy, enabling wide deployment of embedded/edge processing. First, I present DeepMatter, a scalable framework across algorithms, architectures and hardware to design an embedded Deep Neural Network (DNN) accelerator. DeepMatter takes any number of raw sensor data streams for a variety of applications and classifies the events with 92% accuracy while consuming very low power. Next I present two programmable domain-specific manycore accelerators, namely PENC and BinMAC, that achieve energy and speed efficiencies comparable to application-specific custom hardware through data-level and task-level parallelization as well as customization of instruction sets per core and near-memory computing. I will show the efficiency of the DeepMatter, PENC and BinMAC solutions for several application domains including multi-physiological processing for seizure and stress monitoring, a tongue-drive assistive device, air quality monitoring and vision-based situational awareness. The solutions derived at the intersection of algorithms, architectures and implementation allow designers to rapidly prototype and deploy the next generation of sophisticated and intelligent systems for efficient edge processing in extreme environments.

Biography:

Tinoosh Mohsenin is an Assistant Professor in the Department of Computer Science and Electrical Engineering at the University of Maryland, Baltimore County, where she directs the Energy Efficient High Performance Computing (EEHPC) Lab. She received her PhD from the University of California, Davis in 2010 and her M.S. degree from Rice University in 2004, both in Electrical and Computer Engineering. Prof. Mohsenin’s research focus is on designing highly accurate and energy-efficient embedded processors for machine learning, signal processing and knowledge extraction techniques for autonomous systems, wearable smart health monitoring, and embedded big data computing. She has over 80 peer-reviewed journal and conference publications and is the recipient of the NSF CAREER award in 2017, the best paper award at the GLSVLSI conference in 2016, and the best paper honorable mention at ISCAS 2017 for developing domain-specific accelerators for biomedical, deep learning and cognitive computing. She currently leads 8 research projects in her lab, funded by the National Science Foundation (NSF), Army Research Lab (ARL), Northrop Grumman, Boeing, Nvidia and Xilinx. She has served as an associate editor of IEEE Transactions on Circuits and Systems-I (TCAS-I) and IEEE Transactions on Biomedical Circuits and Systems (TBioCAS). She was the local arrangement co-chair for the 50th IEEE International Symposium on Circuits and Systems (ISCAS) in Baltimore. She has also served as a technical program committee member of the IEEE International Solid-State Circuits Conference Student Research Preview (ISSCC-SRP), IEEE Biomedical Circuits and Systems (BioCAS), IEEE International Symposium on Circuits and Systems (ISCAS), ACM Great Lakes Symposium on VLSI (GLSVLSI) and IEEE International Symposium on Quality Electronic Design (ISQED) conferences. She also serves as secretary of IEEE P1890 on Error Correction Coding for Non-Volatile Memories.

 
“Sensors: Innovation at Intersections”

Name: Khaled Salama

Date and Time: Friday, August 3 at 10:00 a.m. – 11:00 a.m.

Location: Engineering Hall 2430

Abstract: 

Energy efficiency is a key requirement for wireless sensor nodes, biomedical implants, and wearable devices. The energy consumption of the sensor node needs to be minimized to avoid battery replacement or, even better, to enable the device to survive on energy harvested from the ambient environment. Capacitive sensors do not consume static power; thus, they are attractive from an energy-efficiency perspective. In addition, they can be employed in a wide range of sensing applications, such as pressure, humidity, biological, and chemical sensing. We will provide a summary of various sensors developed under the KAUST sensors initiative, a consortium of nine institutions: KAUST, MIT, UCLA, Georgia Tech, Brown University, TU Delft, Swansea University, the University of Regensburg, and the Australian Institute of Marine Science (AIMS).

Biography:

Khaled N. Salama received the B.S. degree from the Department of Electronics and Communications, Cairo University, Cairo, Egypt, in 1997, and the M.S. and Ph.D. degrees from the Department of Electrical Engineering, Stanford University, Stanford, CA, USA, in 2000 and 2005, respectively. He was an Assistant Professor at Rensselaer Polytechnic Institute, NY, USA, between 2005 and 2009. He joined King Abdullah University of Science and Technology (KAUST) in January 2009, where he is now a professor, and was the founding Program Chair until August 2011. His work on CMOS sensors for molecular detection has been funded by the National Institutes of Health (NIH) and the Defense Advanced Research Projects Agency (DARPA), won the Stanford–Berkeley Innovators Challenge Award in biological sciences, and was acquired by Lumina Inc. He is the author of 200 papers and 14 US patents on low-power mixed-signal circuits for intelligent fully integrated sensors and neuromorphic circuits using memristor devices.

 
“Accurate and Stable CPU Power Modelling and Run-Time System Management”

Name: Matthew Walker

Date and Time: Friday, July 27 at 11:00 a.m. – 12:00 p.m.

Location: Donald Bren Hall 4011

Abstract: 

Modern processors must provide an ever-increasing level of performance and therefore include growing numbers of Heterogeneous Multi-Processing (HMP) units. Intelligent run-time control of performance and power consumption is required to extend battery life in mobile systems, reduce energy and cooling costs in data centres, and increase peak performance while respecting thermal and power constraints. Accurate online power estimation is essential in guiding run-time power management mechanisms and energy-aware scheduling decisions.
In this talk Matt will share his experience with CPU modelling and run-time management, and present three open-source software tools for developing power models on mobile devices (http://www.powmon.ecs.soton.ac.uk), calibrating performance and energy models in the gem5 simulation framework (http://gemstone.ecs.soton.ac.uk), and developing run-time management algorithms (https://github.com/PRiME-project/PRiME-Framework/).
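Power models of this kind are commonly built by regressing measured power against performance-counter events. The sketch below uses invented event names, invented coefficients and synthetic data, with a pure-Python least-squares fit; it illustrates the general technique, not the powmon tool's actual code:

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations (X^T X) w = X^T y,
    solved with Gaussian elimination. Each row of X starts with a 1 for
    the intercept (static power) term."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for col in range(n):                       # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n                              # back substitution
    for r in range(n - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w

# Synthetic "measurements" in mega-events: P = 0.5 W static
# + 0.002 W per Mcycle + 0.005 W per M cache misses (invented numbers)
samples = [(100, 1), (200, 5), (300, 2), (150, 8), (250, 3)]
X = [[1.0, cyc, miss] for cyc, miss in samples]
y = [0.5 + 0.002 * cyc + 0.005 * miss for cyc, miss in samples]
print([round(w, 6) for w in fit_linear(X, y)])  # → [0.5, 0.002, 0.005]
```

A real methodology must also choose which counter events to include and validate the model's stability across workloads, which is a central theme of the talk.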

Biography:

Matthew J. Walker received his M.Eng. degree in Electronic Engineering (Hons) from the University of Southampton, UK, in 2013. In 2016 he worked on implementing non-volatile memory technologies in mobile systems at Intel Labs, CA, and he has also completed two internships at Arm, UK, in 2015 and 2018. He is currently in the fourth year of his PhD at the University of Southampton, researching CPU modelling techniques and run-time management approaches.

 
“Regional Coherence and Near Memory Acceleration in Distributed-Shared-Memory Architectures”

Name: Andreas Herkersdorf

Date and Time: Thursday, July 26 at 11:00 a.m. – 12:00 p.m.

Location: Donald Bren Hall 3011

Abstract: 

Data access latencies and bandwidth bottlenecks frequently represent major limiting factors for the computational effectiveness of many-core processor architectures. This presentation introduces two
conceptually complementary approaches to reduce the synchronization overheads for coherence
maintenance and to improve the locality between computing resources and data: Region-based cache
coherence and near memory acceleration.

A 2D array of compute tiles with multiple heterogeneous RISC cores, two levels of caches and a tile-local SRAM memory serves as the reference processing platform. These compute tiles, various I/O tiles and a globally shared DDR SDRAM memory tile are interconnected by a meshed Network on Chip (NoC) with support for multiple quality-of-service levels. Overall, this processing architecture follows a distributed-shared-memory model. The limited degree of parallelism in many embedded computing applications also bounds the number of compute tiles that can possibly share associated data structures. Therefore, we favor region-based cache coherence (RBCC) among a limited number of compute tiles over global coherence approaches. Coherence regions are dynamically configured at runtime and comprise a number of arbitrary (adjacent or non-adjacent) compute tiles which are interconnected through regular NoC channels for the exchange of coherence protocol messages. We will show that region-based coherence allows maintaining substantially smaller coherence directories (e.g., approx. 40% smaller for 16-tile systems with up to 4 tiles per region) and shorter sharer-checking latencies than global coherence. RBCC increases the locally usable intra-tile shared SRAM memory and may reduce execution times of sample video processing applications by 30% in comparison to message-passing-based parallelization. However, the benefits of RBCC may strongly depend on the task and data placement among tiles in the coherence region, which can affect performance by up to an order of magnitude. Near-memory processing using near-memory accelerators (NMA) positions processing resources for specific data manipulations as close as possible to the data memory, shortening access latencies and increasing compute efficiency.
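The directory-size argument can be made concrete with a back-of-the-envelope calculation. All widths below are invented for illustration; the ~40% figure quoted in the abstract comes from the speaker's actual design, which this toy does not reproduce exactly:

```python
def directory_bits(entries, tag_bits, state_bits, sharer_bits):
    """Total directory size: each entry tracks a tag, a coherence state,
    and one presence bit per potential sharer."""
    return entries * (tag_bits + state_bits + sharer_bits)

ENTRIES, TAG, STATE = 4096, 20, 2          # invented widths, illustration only
global_dir = directory_bits(ENTRIES, TAG, STATE, sharer_bits=16)  # all 16 tiles
rbcc_dir   = directory_bits(ENTRIES, TAG, STATE, sharer_bits=4)   # <= 4 tiles/region
print(f"reduction: {1 - rbcc_dir / global_dir:.0%}")  # → reduction: 32%
```

With these invented widths the sharer vector shrinks from 16 to 4 bits per entry, giving a saving in the same ballpark as the approx. 40% the abstract reports for 16-tile systems with up to 4 tiles per region.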

Biography:

Andreas Herkersdorf is a professor in the Department of Electrical and Computer Engineering and also affiliated with the Department of Informatics at the Technical University of Munich (TUM). He received a Dr. degree from ETH Zurich, Switzerland, in 1991. Between 1988 and 2003, he held technical and management positions with the IBM Research Laboratory in Rüschlikon, Switzerland. Since 2003, Dr. Herkersdorf leads the Chair of Integrated Systems at TUM. He is a senior member of the IEEE, a member of the DFG (German Research Foundation) Review Board, and serves as an editor for Springer and De Gruyter journals in design automation and information technology. His research interests include application-specific multi-processor architectures, IP network processing, Network on Chip and self-adaptive fault-tolerant computing.

 
“Are there any questions?”

Name: Yale Patt

Date and Time: Thursday, June 7 at 2:00 p.m. – 3:00 p.m.

Location: Engineering Hall 2430

Abstract: 

Too many seminars can be characterized as 50 minutes of lecture, followed by 5 minutes of questions. …and the first question makes it clear that nothing the speaker said in the 50 minutes was at all interesting to the audience. So, we will take the opposite approach: 5 minutes of lecture, followed by 50 minutes of questions. 5 minutes of lecture just to set the tone. Where the questions will take us is anybody’s guess. It will probably have something to do with computer architecture, historical or current, but it may venture off into something political …or at least politically correct.

Biography:

Yale Patt is a Professor of Electrical and Computer Engineering and the Ernest
Cockrell, Jr. Centennial Chair in Engineering at The University of Texas at Austin. He divides his
time between teaching the freshman and senior courses and his advanced graduate course in
microarchitecture, working with his PhD students, and consulting in the microprocessor industry.
He earned obligatory degrees from reputable universities and has received more than his share of
awards for his research and teaching. Patt has spent much of his career pursuing aggressive ILP,
out-of-order, and speculative computer architectures, such as HPSm. He is a Fellow of both the
IEEE and the ACM, and a member of the National Academy of Engineering. He received his master’s
and Ph.D. degrees in Electrical Engineering from Stanford University.

 
“One does not need a Village to make a Billion Gate ASIC, only a family if Synchoros VLSI Design Style is adopted”

Name: Ahmed Hemani

Date and Time: June 6, 2018 at 4:00 p.m.

Location: Engineering Hall 2430

Committee: Nikil Dutt (Host)

Abstract:

Norman Jouppi, while introducing the record-long list of authors of Google’s TPU paper at ISCA 2017, remarked that one needs a village to make a chip. This talk proposes a VLSI design method that holds the promise of getting the same work done with just a family. This new VLSI design method is based on two principles: the first is to raise the abstraction of physical design from present-day Boolean-level standard cells to the micro-architecture level, and the second is to adopt a synchoros VLSI design style that enables composition by abutment, eliminating logic and physical synthesis for the end user.

The proposed method raises the abstraction of physical design to the micro-architecture level by adopting coarse-grain reconfigurable logic for computation, storage and interconnect. All variations in function, capacity, architecture and degree of parallelism are realised by clustering and configuration of the coarse-grain reconfigurable cells that we call SiLago blocks (Silicon Lego blocks). The micro-architecture-level SiLago blocks replace Boolean-level standard cells as the atomic building blocks of VLSI systems. These CGRA fabrics are domain-specific: inner modem, outer modem, scratchpad memory, dynamic programming, NoCs, and infrastructural elements like a RISC system controller, PLL/CGU, RGU, DRAM control, etc. Each CGRA fabric is holistically customized for its domain, not just for computation but also for control, interconnect, address generation, local storage, etc.

The CGRAs provide an architecturally regular basis for a synchoros VLSI design style. Synchoricity is derived from the Greek word “choros” for space: just as synchronous systems divide time uniformly with clock ticks to enable temporal composition, synchoros systems divide space uniformly with grids to enable spatial composition. All SiLago blocks are synchoros or ratiochoros and bring all their interconnects out to the periphery, on grid, at the right place and on the right metal layer, to enable composition by abutment of valid neighbours.

The net result of adopting these two principles is that as soon as a design is refined down to the micro-architecture level, the dimensions and position of every single wire segment and transistor are known with certainty, even in a 100-million-gate design. Unlike standard-cell design, where the positions of the standard cells and the wires connecting them are left to physical synthesis, in the synchoros design style these aspects are parametrically hardened. Global wires like the power grid, clocks, resets and NoCs are not synthesised; they emerge as a result of the abutment process. This enables automation from system models down to GDSII to create custom, spatially distributed functional hardware, i.e., ASICs, with just a family.

A proof-of-concept synthesis flow exists for transforming applications (hierarchies of algorithms) to GDSII, and a path from the system level to GDSII is well defined and being implemented. System level here implies interacting applications.

Biography:

Ahmed Hemani is Professor in Electronic Systems Design at the School of ICT, KTH, Kista, Sweden. His current research interests are massively parallel architectures and design methods and their applications to scientific computing and brain-inspired autonomous embedded systems. In the past he contributed to high-level synthesis: his doctoral thesis was the basis for the first high-level synthesis product introduced by Cadence, called Visual Architect. He also pioneered the Network-on-Chip concept and has contributed to clocking and low-power architectures and design methods. He has worked extensively in industry, including at National Semiconductor, ABB, Ericsson, Philips Semiconductors, and Newlogic, and has been part of three start-ups.

 
“Self-aware Computing: Combining Learning and Control to Manage Complex, Dynamic Systems”

Speaker: Henry (Hank) Hoffmann, Associate Professor, University of Chicago

Date and Time: Friday, June 1 at 3:00 p.m.

Location: Engineering Hall 2430

Abstract: 

Modern computing systems must meet multiple—often conflicting—goals; e.g., high-performance and low energy consumption. The current state-of-practice involves ad hoc, heuristic solutions to such system management problems that offer no formally verifiable behavior and must be rewritten or redesigned wholesale as new computing platforms and constraints evolve. In this talk, I will discuss my research on building self-aware computing systems that address computing system goals and constraints in a fundamental way, starting with rigorous mathematical models and ending with real software and hardware implementations that have formally analyzable behavior and can be re-purposed to address new problems as they emerge.

These self-aware systems are distinguished by awareness of user goals and operating environment; they continuously monitor themselves and adapt their behavior and foundational models to ensure the goals are met despite the challenges of complexity (diverse hardware resources to be managed) and dynamics (unpredictable changes in input workload or resource availability). In this talk, I will describe how to build self-aware systems through a combination of control theoretic and machine learning techniques. I will then show how this combination enables new capabilities, like increasing system robustness, reducing application energy, and meeting latency requirements even with no prior knowledge of the application.
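The control-theoretic half of this combination can be illustrated with a textbook integral controller that adjusts an allocated resource to drive measured latency toward a goal, with no prior model fit. Everything in the sketch below is a made-up stand-in for exposition (the linear "plant", the gain, the resource knob); it is not the actual SEEC runtime.

```python
# Minimal sketch of feedback-driven resource management (illustrative:
# the "plant" model and controller gain are invented, not the real system).

def plant_latency(speed: float) -> float:
    """Hypothetical application: latency is inversely related to the
    allocated resource 'speed' (e.g., normalized clock rate x cores)."""
    return 100.0 / speed


def run_controller(target: float, steps: int = 200, gain: float = 0.01) -> float:
    """Integral control: accumulate the latency error into the actuator."""
    speed = 1.0                            # start with minimal resources
    for _ in range(steps):
        latency = plant_latency(speed)     # monitor
        error = latency - target           # positive -> too slow
        speed += gain * error              # add resources when too slow
        speed = max(speed, 0.1)            # keep the actuator in range
    return plant_latency(speed)


# Latency converges toward the 20 ms goal despite starting far from it.
final = run_controller(target=20.0)
print(round(final, 1))
```

The appeal of framing the problem this way is that the loop's convergence and robustness can be analyzed formally, which ad hoc heuristics do not allow; the learning component (not shown) would refine the plant model online.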

Biography:

Henry Hoffmann is an Associate Professor in the Department of Computer Science at the University of Chicago, where he was granted early tenure in 2018. At Chicago he leads the Self-aware Computing group (the SEEC project) and conducts research on adaptive techniques for power, energy, accuracy, and performance management in computing systems. He received the DOE Early Career Award in 2015 and has spent the last 17 years working on multicore architectures and system software in both academia and industry. He completed a PhD in Electrical Engineering and Computer Science at MIT, where his research on self-aware computing was named one of the ten “World Changing Ideas” by Scientific American in December 2011. He received his SM degree in Electrical Engineering and Computer Science from MIT in 2003. As a Master’s student he worked on MIT’s Raw processor, one of the first multicores. Along with other members of the Raw team, he spent several years at Tilera Corporation, a startup that commercialized the Raw architecture and created one of the first manycores (Tilera was sold for $130M in 2014). His implementation of the BDTI Communications Benchmark (OFDM) on Tilera’s 64-core TILE64 processor still has the highest certified performance of any programmable processor. In 1999, he received his BS in Mathematical Sciences with highest honors and highest distinction from UNC Chapel Hill.

 

 
“Protocol-fuzzing mobile networks with open-source tools to enhance the security of LTE and 5G mobile networks”

Speaker: Roger Piqueras Jover, Security Researcher at Bloomberg LP

Date and Time: Tuesday, May 15 at 10:00 a.m.

Location: Donald Bren Hall 3011

Abstract: 

Long Term Evolution (LTE) is the latest mobile communications standard, deployed globally to provide connectivity to billions of mobile devices, from personal cell phones to all types of critical systems, such as self-driving cars, medical appliances and industrial IoT sensors. As such, the security of this standard is of paramount importance. However, there are concerning inherent protocol security threats in LTE due to the large number of unauthenticated and unprotected messages exchanged between a base station and a mobile device prior to the authentication security handshake.

Open-source implementations of the LTE standards have rapidly matured within the last couple of years. This, in combination with sophisticated yet low-cost software radio hardware, has fueled a new wave of security research that identified numerous protocol security issues in LTE that could allow an adversary to deny service to mobile endpoints and track the location of users. This talk will summarize an ongoing effort on protocol-fuzzing LTE mobile networks using open-source tools. The protocol exploits against mobile endpoints discovered two years ago will be discussed as an introduction to a new systematic approach to protocol-fuzzing LTE networks, introducing as well a series of new potential exploits in the uplink, against the network infrastructure and against mobile devices outside the radio range of the adversary.
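The core idea of protocol fuzzing, mutating fields of the unauthenticated pre-authentication messages and observing how the stack under test reacts, can be sketched in a few lines. The sketch below is purely illustrative: the 16-byte message layout is invented, and a real campaign would drive an open-source LTE stack over software radio hardware rather than build byte strings in isolation.

```python
# Toy mutation fuzzer for an unauthenticated message template
# (illustrative only: the message layout is invented, not a real
# LTE message, and no radio stack is driven here).
import random

# Hypothetical pre-authentication message: type byte, then payload fields.
TEMPLATE = bytes([0x41, 0x01]) + b"\x00" * 14  # 16-byte stand-in


def mutate(msg: bytes, rng: random.Random, n_flips: int = 3) -> bytes:
    """Randomize a few bytes, keeping the message-type byte intact so the
    stack under test still dispatches the message to the right handler."""
    out = bytearray(msg)
    for _ in range(n_flips):
        i = rng.randrange(1, len(out))  # never touch byte 0 (the type)
        out[i] = rng.randrange(256)
    return bytes(out)


def fuzz_corpus(seed: int = 7, count: int = 100) -> list:
    """Generate a reproducible corpus of mutated messages."""
    rng = random.Random(seed)  # seeded, so crashes can be replayed
    return [mutate(TEMPLATE, rng) for _ in range(count)]


corpus = fuzz_corpus()
print(len(corpus), all(m[0] == 0x41 for m in corpus))
```

Seeding the generator is the important design choice here: any input that crashes or hangs the target can be regenerated deterministically, which is what turns ad hoc probing into the systematic approach the talk describes.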

Finally, these LTE protocol exploits are analyzed in the context of the recently published (December 2017) first release of the 3GPP 5G specifications (3GPP Rel. 15) for the 5G Radio Access Network (5G New Radio) and Core Network (5G System). Unfortunately, not only are most protocol-aware radio jamming issues and LTE protocol exploits still a potential threat, but there is also a large number of new pre-authentication messages and new fields in already existing messages that could open the door to further 5G-specific exploits.

Biography:

Roger Piqueras Jover is a Wireless Security Research Scientist and Security Architect on the CTO Security Architecture team of Bloomberg LP, where he leads projects on mobile/wireless security and is actively involved in hardware security, network security, machine learning and anomaly/fraud detection. Before Bloomberg, he spent 5 years at the AT&T Security Research Center (AT&T SRC), where he led the research area on wireless and LTE mobile network security and received numerous awards for his work.

Roger holds 17 issued patents on mobile and wireless security, has co-authored manuscripts in numerous top communications and security conferences and is the Technical Co-Chair for the ongoing IEEE 5G Summit series.

Roger holds a Dipl. Ing. from the Polytechnic University of Catalonia (Barcelona, Spain), a Master’s in Electrical and Computer Engineering from the University of California, Irvine, and a Master’s/MPhil and EBD (Everything But Dissertation) in Electrical Engineering from Columbia University.

For a more detailed biography and details of his wireless security work on LTE, 5G, LoRaWAN and other technologies, refer to http://rogerpiquerasjover.net/