Seminars at CECS

“Are there any questions?”

Name: Yale Patt

Date and Time: Thursday, June 7 at 2:00 p.m. – 3:00 p.m.

Location: Engineering Hall 2430

Abstract: 

Too many seminars can be characterized as 50 minutes of lecture, followed by 5 minutes of questions. …and the first question makes it clear that nothing the speaker said in the 50 minutes was at all interesting to the audience. So, we will take the opposite approach: 5 minutes of lecture, followed by 50 minutes of questions. 5 minutes of lecture just to set the tone. Where the questions will take us is anybody’s guess. It will probably have something to do with computer architecture, historical or current, but it may venture off into something political …or at least politically correct.

Biography:

Yale Patt is a Professor of Electrical and Computer Engineering and the Ernest Cockrell, Jr. Centennial Chair in Engineering at The University of Texas at Austin. He divides his time between teaching the freshman and senior courses and his advanced graduate course in microarchitecture, working with his PhD students, and consulting in the microprocessor industry. He earned obligatory degrees from reputable universities and has received more than his share of awards for his research and teaching. Patt has spent much of his career pursuing aggressive ILP, out-of-order, and speculative computer architectures, such as HPSm. He is a Fellow of both the IEEE and ACM, and a member of the National Academy of Engineering. He received his master’s and Ph.D. degrees in Electrical Engineering from Stanford University.

“One does not need a Village to make a Billion Gate ASIC, only a family if Synchoros VLSI Design Style is adopted”

Name: Ahmed Hemani

Date and Time: June 6, 2018 at 4:00 p.m.

Location: Engineering Hall 2430

Committee: Nikil Dutt (Host)

Abstract:

Norman Jouppi, while introducing the record-long list of authors of Google’s TPU paper at ISCA 2017, remarked that one needs a village to make a chip. This talk proposes a VLSI design method that holds the promise of getting the same work done with just a family. The new method rests on two principles: the first is to raise the abstraction of physical design from present-day Boolean-level standard cells to the micro-architecture level, and the second is to adopt a synchoros VLSI design style that enables composition by abutment, eliminating logic and physical synthesis for the end user.

The proposed method raises the abstraction of physical design to the micro-architecture level by adopting coarse-grain reconfigurable logic for computation, storage and interconnect. All variations in function, capacity, architecture and degree of parallelism are realised by clustering and configuring the coarse-grain reconfigurable cells that we call SiLago (Silicon Lego) blocks. These micro-architecture-level SiLago blocks replace Boolean-level standard cells as the atomic building blocks of VLSI systems. The CGRA fabrics are domain-specific – inner modem, outer modem, scratchpad memory, dynamic programming, NoCs, and infrastructural elements like the RISC system controller, PLL/CGU, RGU, DRAM control, etc. Each CGRA fabric is holistically customized for its domain, not just for computation but also for control, interconnect, address generation, local storage, etc.

The CGRAs provide an architecturally regular basis for a synchoros VLSI design style. Synchoricity is derived from the Greek word “choros” for space: just as synchronous systems divide time uniformly with clock ticks and enable temporal composition, synchoros systems divide space uniformly with grids and enable spatial composition. All SiLago blocks are synchoros or ratiochoros and bring all their interconnects out to the periphery on-grid, at the right place and on the right metal layer, to enable composition by abutment of valid neighbours.

The net result of adopting these two principles is that as soon as a design is refined down to the micro-architecture level, the dimension and position of every single wire segment and transistor are known with certainty, even in a 100-million-gate design. Unlike standard-cell design, where the positions of cells and of the wires connecting them are left to physical synthesis, in the synchoros design style these aspects are parametrically hardened. Global wires like the power grid, clocks, resets and NoCs are not synthesised; they emerge as a result of the abutment process. This enables automation from system models down to GDSII to create custom, spatially distributed functional hardware, i.e., ASICs with just a family.
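
To make composition by abutment concrete, here is a minimal sketch in Python (the grid pitch, block dimensions and pin offsets are hypothetical illustrations, not the actual SiLago framework): absolute coordinates follow deterministically from grid indices, and two blocks compose when their shared-edge pins line up.

    GRID = 10  # hypothetical grid pitch (um); every block edge and pin sits on it

    class Block:
        def __init__(self, name, col, row, w_units, h_units, pins_w, pins_e):
            self.name, self.col, self.row = name, col, row
            self.w, self.h = w_units * GRID, h_units * GRID
            self.pins_w, self.pins_e = pins_w, pins_e  # pin offsets (grid units)

        def origin(self):
            # Once the micro-architecture-level placement is fixed, every
            # coordinate is known with certainty: no physical synthesis needed.
            return (self.col * GRID, self.row * GRID)

    def abuts(left, right):
        # Valid neighbours: 'right' starts exactly where 'left' ends, and the
        # pins on the shared edge line up, so composition is by abutment.
        lx, ly = left.origin()
        rx, ry = right.origin()
        return rx == lx + left.w and ry == ly and left.pins_e == right.pins_w

    a = Block('dpu0', col=0, row=0, w_units=4, h_units=4, pins_w=[1], pins_e=[1, 2])
    b = Block('mem0', col=4, row=0, w_units=4, h_units=4, pins_w=[1, 2], pins_e=[3])
    print(abuts(a, b))  # True: the wires between a and b emerge from abutment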

A proof-of-concept synthesis flow exists for transforming applications (hierarchies of algorithms) to GDSII, and a path from system level – that is, interacting applications – to GDSII is well defined and being implemented.

Biography:

Ahmed Hemani is Professor in Electronic Systems Design at the School of ICT, KTH, Kista, Sweden. His current research interests are massively parallel architectures and design methods, and their applications to scientific computing and brain-inspired autonomous embedded systems. In the past he has contributed to high-level synthesis: his doctoral thesis was the basis for the first high-level synthesis product introduced by Cadence, called Visual Architect. He also pioneered the network-on-chip concept and has contributed to clocking and low-power architectures and design methods. He has worked extensively in industry, including at National Semiconductor, ABB, Ericsson, Philips Semiconductors and Newlogic, and has been part of three start-ups.

“Self-aware Computing: Combining Learning and Control to Manage Complex, Dynamic Systems”

Speaker: Hank Hoffmann, Associate Professor, University of Chicago

Date and Time: Friday, June 1 at 3:00 p.m.

Location: Engineering Hall 2430

Abstract: 

Modern computing systems must meet multiple, often conflicting, goals; e.g., high performance and low energy consumption. The current state of practice involves ad hoc, heuristic solutions to such system management problems that offer no formally verifiable behavior and must be rewritten or redesigned wholesale as new computing platforms and constraints evolve. In this talk, I will discuss my research on building self-aware computing systems that address computing system goals and constraints in a fundamental way, starting with rigorous mathematical models and ending with real software and hardware implementations that have formally analyzable behavior and can be re-purposed to address new problems as they emerge.

These self-aware systems are distinguished by awareness of user goals and operating environment; they continuously monitor themselves and adapt their behavior and foundational models to ensure the goals are met despite the challenges of complexity (diverse hardware resources to be managed) and dynamics (unpredictable changes in input workload or resource availability). In this talk, I will describe how to build self-aware systems through a combination of control-theoretic and machine learning techniques. I will then show how this combination enables new capabilities, like increasing system robustness, reducing application energy, and meeting latency requirements even with no prior knowledge of the application.
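
As a flavor of the control-theoretic half of such a system, below is a minimal self-adaptive latency controller in Python; the toy system model, thresholds and core-count knob are illustrative assumptions, not the formally analyzed controllers presented in the talk.

    import random

    TARGET = 100.0  # latency goal in ms

    def measure_latency(cores):
        # Toy system model: work parallelizes imperfectly, measurements are noisy.
        return 300.0 / (cores ** 0.8) + random.uniform(-5.0, 5.0)

    def adapt(max_cores=16, steps=20):
        cores = 1
        for step in range(steps):
            latency = measure_latency(cores)
            if latency > 1.05 * TARGET and cores < max_cores:
                cores += 1   # too slow: allocate more resources
            elif latency < 0.80 * TARGET and cores > 1:
                cores -= 1   # comfortably fast: release resources, save energy
            print(f"step {step:2d}: cores={cores:2d} latency={latency:6.1f} ms")

    adapt()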

Biography:

Henry Hoffmann is an Associate Professor in the Department of Computer Science at the University of Chicago. He was granted early tenure in 2018. At Chicago he leads the Self-aware Computing group (or SEEC project) and conducts research on adaptive techniques for power, energy, accuracy, and performance management in computing systems. He received the DOE Early Career Award in 2015. He has spent the last 17 years working on multicore architectures and system software in both academia and industry. He completed a PhD in Electrical Engineering and Computer Science at MIT, where his research on self-aware computing was named one of the ten “World Changing Ideas” by Scientific American in December 2011. He received his SM degree in Electrical Engineering and Computer Science from MIT in 2003. As a Master’s student he worked on MIT’s Raw processor, one of the first multicores. Along with other members of the Raw team, he spent several years at Tilera Corporation, a startup which commercialized the Raw architecture and created one of the first manycores (Tilera was sold for $130M in 2014). His implementation of the BDTI Communications Benchmark (OFDM) on Tilera’s 64-core TILE64 processor still has the highest certified performance of any programmable processor. In 1999, he received his BS in Mathematical Sciences with highest honors and highest distinction from UNC Chapel Hill.

“Protocol-fuzzing mobile networks with open-source tools to enhance the security of LTE and 5G mobile networks”

Speaker: Roger Piqueras Jover, Security Researcher at Bloomberg LP

Date and Time: Tuesday, May 15 at 10:00 a.m.

Location: Donald Bren Hall 3011

Abstract: 

Long Term Evolution (LTE) is the latest mobile communications standard, deployed globally to provide connectivity to billions of mobile devices, from personal cell phones to all types of critical systems, such as self-driving cars, medical appliances and industrial IoT sensors. As such, the security of this communication standard is of paramount importance. However, there are concerning inherent protocol security threats in LTE, due to the large number of unauthenticated and unprotected messages exchanged between a base station and a mobile device prior to the authentication security handshake.

Open-source implementations of the LTE standards have rapidly matured within the last couple of years. This, in combination with sophisticated yet low-cost software radio hardware, has fueled a new wave of security research that identified numerous protocol security issues in LTE that could allow an adversary to deny service to mobile endpoints and track the location of users. This talk will summarize an ongoing effort on protocol-fuzzing LTE mobile networks using open-source tools. Protocol exploits against mobile endpoints discovered two years ago will be discussed as an introduction to a new systematic approach to protocol-fuzzing LTE networks, along with a series of new potential uplink exploits against the network infrastructure and against mobile devices outside the adversary’s radio range.
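
As a generic illustration of the shape of such fuzzing (a sketch only; the seed bytes below are hypothetical and this is not the speaker’s tool), a protocol fuzzer mutates fields of pre-authentication messages and observes how the target reacts:

    import random

    SEED_MSG = bytes.fromhex("40051c0000000000")  # hypothetical seed message

    def mutate(msg, n_flips=2):
        out = bytearray(msg)
        for _ in range(n_flips):
            i = random.randrange(len(out))
            out[i] ^= 1 << random.randrange(8)  # flip a random bit of a field
        return bytes(out)

    def send_and_observe(msg):
        # Placeholder: transmit via a software radio and watch whether the
        # target device or base station crashes, stalls, or leaks state.
        return "ok"

    for _ in range(1000):
        candidate = mutate(SEED_MSG)
        if send_and_observe(candidate) != "ok":
            print("interesting input:", candidate.hex())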

Finally, these LTE protocol exploits are analyzed in the context of the recently published (December 2017) first release of the 3GPP 5G specifications (3GPP Rel. 15) for the 5G Radio Access Network (5G New Radio) and Core Network (5G System). Unfortunately, not only do most protocol-aware radio jamming issues and LTE protocol exploits remain a potential threat, but there is also a large number of new pre-authentication messages, and new fields in existing messages, that could open the door to further 5G-specific exploits.

Biography:

Roger Piqueras Jover is a Wireless Security Research Scientist and Security Architect on the CTO Security Architecture team at Bloomberg LP, where he leads projects on mobile/wireless security and is actively involved in hardware security, network security, machine learning and anomaly/fraud detection. Before Bloomberg, he spent five years at the AT&T Security Research Center (AT&T SRC), where he led the research area on wireless and LTE mobile network security and received numerous awards for his work.

Roger holds 17 issued patents on mobile and wireless security, has co-authored manuscripts in numerous top communications and security conferences and is the Technical Co-Chair for the ongoing IEEE 5G Summit series.

Roger holds a Dipl. Ing. from the Polytechnic University of Catalonia (Barcelona, Spain), a Master’s in Electrical and Computer Engineering from the University of California, Irvine, and a Master’s/MPhil and EBD (Everything But Dissertation) in Electrical Engineering from Columbia University.

For a much more detailed biography and details on his wireless security work on LTE, 5G, LoRaWAN and other technologies, see http://rogerpiquerasjover.net/

“ASSISTECH: Assistive Technology for the Visually Impaired”

Speaker: Professor Mayandiambalam Balakrishnan, CSE, Indian Institute of Technology, Delhi, India

Date and Time: Monday, March 26 at 3:00PM-4:00PM

Location: Donald Bren Hall 4011

Abstract: 

More than ten years ago we started on the ASSISTECH journey – a laboratory dedicated to finding technology solutions for the mobility and education of the visually impaired. It has been a very challenging but deeply fulfilling journey. Globally, there are many technology solutions available for addressing the mobility and education needs of the visually impaired. Unfortunately, they fail to address the challenges of mobility and education in India and other developing countries, and even when they do, they are completely unaffordable. In this talk I will tell the story of the development and dissemination of devices and technologies like SmartCane, OnBoard and Tactile Diagrams. These started in the ASSISTECH laboratory as interdisciplinary projects and are today slowly finding their way to users.

Biography:

M Balakrishnan has a B.Tech. (EEE) from BITS Pilani and a PhD (EE) from IIT Delhi. He has been involved in teaching and research in EDA, VLSI and embedded systems for more than three decades. He has published over 120 papers in leading journals and conferences, and supervised 13 PhD students and over 180 B.Tech. and M.Tech. student projects. He has held visiting positions in Canada, the US and Germany. He founded the ASSISTECH laboratory at IIT Delhi, which is engaged in finding affordable technology-based solutions for the education and mobility of the visually impaired. He is a recipient of two national awards for his work in the disability space. He has been HoD (CSE), Dean (Post Graduate Studies & Research) and Deputy Director (Faculty), and is currently Deputy Director (Strategy & Planning) at IIT Delhi.

“Cost and Power Efficient Deep Neural Network Acceleration”

Speaker: Associate Prof. Jongeun Lee, School of ECE, UNIST, Ulsan, Korea

Date and Time: Monday, February 26, 2018 at 11:00am – 12:00pm

Location: EH 2430

Abstract:

Deep neural networks have proved to be the right direction after all, but their enormous computational complexity calls for new research into novel ways of implementing the connectionist model efficiently on small, mobile devices.

In this talk I will discuss two directions, one based on FPGAs and the other based on a new computing paradigm called stochastic computing (SC).  An FPGA is a very capable device, housing thousands of processing elements in a single chip with low power consumption.  However, FPGAs are very different from GPUs, and making FPGAs easy to program for deep neural networks is one of the key challenges facing them today.  I will present a design space exploration approach that helps find the best design for a given FPGA and a specific deep neural network model.
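
As a flavor of what such design space exploration can look like (a toy sketch; the resource budget, layer shape and cost model are illustrative assumptions, not the approach presented in the talk), one can enumerate loop-tiling factors for a convolution layer and keep the fastest design that fits the device:

    # Hypothetical FPGA budget and layer shape (illustrative numbers only).
    DSPS, BRAM_KB = 900, 2000
    M, N, SIZE = 256, 128, 56      # output channels, input channels, map size

    best = None
    for tm in range(2, M + 1, 2):          # tile of output channels
        for tn in range(2, N + 1, 2):      # tile of input channels
            dsps = tm * tn                 # parallel MACs per cycle
            buffer_kb = (tm + tn) * SIZE * SIZE * 2 / 1024  # on-chip tiles
            if dsps > DSPS or buffer_kb > BRAM_KB:
                continue                   # violates the resource budget
            cycles = (M / tm) * (N / tn) * SIZE * SIZE * 9  # 3x3 kernel
            if best is None or cycles < best[0]:
                best = (cycles, tm, tn)

    print("best (cycles, Tm, Tn):", best)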

Stochastic computing (SC) was first introduced in the 1960s, when computing devices were not very reliable.  Unlike conventional binary number representations, stochastic computing uses a bitstream to represent a number, and is therefore inherently more error-resilient.

In addition, SC shares some good qualities with analog computing such as low power dissipation and low cost while strictly operating in the digital domain.  At the same time, error fluctuation and conversion overhead are the key challenges facing SC today.  I will present a new SC architecture for deep neural networks that is much more accurate and efficient than previous SC solutions.
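
As a generic illustration of the representation (not the specific SC architecture presented in the talk): in unipolar SC, a value p in [0, 1] is encoded as the probability that any given bit of the stream is 1, so a single AND gate multiplies two independent streams, and a few flipped bits perturb the value only slightly.

    import random

    def encode(p, n=4096):
        # Unipolar SC: bits are 1 with probability p, so the stream encodes p.
        return [1 if random.random() < p else 0 for _ in range(n)]

    def decode(stream):
        return sum(stream) / len(stream)

    a, b = encode(0.5), encode(0.4)
    product = [x & y for x, y in zip(a, b)]  # one AND gate acts as a multiplier
    print(decode(product))                   # ~0.20; single bit-flips barely matter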

Biography:

Jongeun Lee received his B.S. and M.S. in Electrical Engineering and his Ph.D. in Electrical Engineering and Computer Science, all from Seoul National University, Korea. In 2009 he joined UNIST (Ulsan National Institute of Science and Technology), Ulsan, Korea, where he is now an Associate Professor of Electrical and Computer Engineering. Prior to joining UNIST, he worked as a postdoctoral research associate at Arizona State University, and before that for Samsung Electronics. His current research interests include reconfigurable architectures, compilers, stochastic computing, and deep neural networks.

“Near-Threshold Computing: The bottom floor for Energy in IoT devices and its Vanishing Design Noise Margins”

Speaker: Prof. Sergio Bampi, Informatics Institute – Microelectronics Design – Federal University of Rio Grande do Sul, Brazil

Date and Time: Thursday, February 15, 2018 at 2:00PM-3:00PM

Location: ICS 432

Abstract:

Promising opportunities for research on heterogeneous Internet-of-Things (IoT) devices lie at the energy bottom floor, where CMOS is still an unbeatable, flexible technology for making the internet of “everything” possible and cost-effective. Near-threshold computing (NTC) in CMOS is a promising alternative for any application that can tolerate, or benefit from, very wide voltage-frequency scaling (VFS). The digital blocks of such devices may operate at very different power-performance modes, from sub-MHz to peaks of hundreds of MHz, which requires CMOS design libraries targeted for NTC. The analog and RF content on IoT chips will be even more crucial from a power-savings standpoint. In IoT it is best to avoid the costlier CMOS nodes of two-dimensional transistor and IC scaling, which is in any case ever closer to its end. The nano-power range achievable in deca-nanometer CMOS at near-VT requires very specific logic design techniques in digital CMOS. This talk addresses a method to design CMOS digital circuits for a wide dynamic range of VFS, targeting near-threshold operation for best energy efficiency. Our work in 65nm CMOS has demonstrated 63X to 77X energy/operation savings for applications that tolerate ultra-wide frequency scaling (from hundreds of kHz to 1 GHz) in their system operating modes. The results were obtained using the minimal cycle time achievable at each supply voltage, down to supplies as low as 200 mV. The strategy for transistor sizing in digital cells and for static noise margin maximization will be addressed in particular. The seminar seeks to stimulate system-level and circuit-level design approaches for IoT nodes, and the presenter is available to discuss digital, mixed-signal, and even RF aspects of CMOS systems on IoT devices.
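
As a first-order illustration of why lowering the supply voltage saves energy per operation (the numbers are illustrative assumptions, not measurements from the 65nm work above), dynamic energy scales roughly as C·V²; the gap to the reported 63X to 77X comes from leakage, architectural effects and running at the minimal cycle time per voltage point.

    C_EFF = 1e-9  # hypothetical effective switched capacitance per op (farads)

    def dynamic_energy(vdd):
        return C_EFF * vdd ** 2      # E_dyn = C * V^2, joules per operation

    nominal, near_vt = 1.2, 0.4      # volts (illustrative operating points)
    ratio = dynamic_energy(nominal) / dynamic_energy(near_vt)
    print(f"energy/op saving from voltage scaling alone: {ratio:.0f}x")  # 9x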

Biography:

Sergio Bampi received the B.Sc. in Electronics Engineering and the B.Sc. in Physics from the Federal University of Rio Grande do Sul (UFRGS, 1979), and the M.Sc. and Ph.D. degrees in EE from Stanford University (USA) in 1986. He is a full professor in the digital systems and microelectronics design fields at the Informatics Institute, where he has been a member of the faculty since 1986. He has been a member of the PPGC Computing Graduate Program since 1988, and of PGMICRO since its start in 2002. He served as Graduate Program Coordinator (2003-2007), head of research group and projects, and technical director of the Microelectronics Center CEITEC (2005-2008), and is the past President of the FAPERGS Research Funding Foundation and of the SBMICRO Society (2002-2006). He is a former member of the technical staff of HP Inc. and was visiting research faculty at Stanford University (1998-99). His research interests are in IC design, nano-CMOS devices, mixed-signal and RF CMOS design, ultra-low-power digital design, and dedicated complex algorithms, architectures, and ASICs for image and video processing. He has co-authored more than 360 papers in these fields and in MOS devices and EDA. He is a senior member of the IEEE. He was Technical Program Chair of the IEEE SBCCI Symposium (1997, 2005), SBMICRO (1989, 1995), IEEE LASCAS (2013), and of the VARI 2016 conference and workshops.

“Digital Early Warning Scoring – A Cognitive IoT based approach”

Speaker: Prof. Pasi Liljeberg, University of Turku

Date and Time: Thursday, January 25, 2018 at 2:00PM-3:00PM

Location: Engineering Hall 2430

Abstract:

In healthcare, effective monitoring of patients plays a key role in detecting health deterioration early enough. Many signs of deterioration appear as early as 24 hours before they have a serious impact on a person’s health. As hospitalization times have to be minimized, in-home or remote early warning systems can fill the gap by allowing in-home care while keeping potentially problematic conditions and their signs under surveillance and control.

The early warning score (EWS) is an approach to detecting the deterioration of a patient. It is based on the fact that several physiological parameters change before a patient’s clinical deterioration. Currently, the EWS procedure is mostly used for in-hospital clinical cases and is performed manually on paper. However, it is possible to build an automated EWS health monitoring system that uses Internet-of-Things technologies to intelligently monitor vital signs and prevent health deterioration for in-home and hospitalized patients.
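
As a sketch of what such automated scoring could look like (the threshold bands below are simplified illustrations, not a validated clinical standard), each vital sign maps to a sub-score and the sum drives escalation:

    def band_score(value, bands):
        # bands: (upper_bound_exclusive, score) pairs; the last entry catches the rest
        for upper, score in bands:
            if value < upper:
                return score
        return bands[-1][1]

    HEART_RATE = [(40, 3), (51, 1), (91, 0), (111, 1), (131, 2), (float("inf"), 3)]
    RESP_RATE = [(9, 3), (12, 1), (21, 0), (25, 2), (float("inf"), 3)]

    def ews(hr, rr):
        return band_score(hr, HEART_RATE) + band_score(rr, RESP_RATE)

    # A wearable IoT node would stream vitals and alert caregivers on a rising score.
    print(ews(hr=118, rr=22))  # 2 + 2 = 4 -> escalate monitoring frequency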

Biography:

Pasi Liljeberg received the MSc and PhD degrees in electronics and information technology from the University of Turku, Turku, Finland, in 1999 and 2005, respectively. He received an adjunct professorship in embedded computing architectures in 2010. He is currently a full professor at the University of Turku in the field of embedded systems and the Internet of Things, with research focused on biomedical engineering and health technology. In that context he established and leads the Internet-of-Things for Healthcare (IoT4Health, http://iot4health.utu.fi) research group. Liljeberg is the author of more than 270 peer-reviewed publications.

“Hierarchical Non-Intrusive In-Situ Requirements Monitoring for Embedded Systems”

Speaker: PhD Candidate Minjun Seo, University of Arizona, Tucson

Date and Time: Thursday, December 21, 2017 at 2:00PM-3:00PM

Location:  Engineering Hall 2430

Abstract:

Accounting for all operating conditions of a system at the design stage is typically infeasible for complex systems. In-situ runtime monitoring and verification can enable a system to introspectively ensure that it is operating correctly in the presence of a dynamic environment, to rapidly detect failures, and to provide detailed execution traces for finding their root cause. Two key challenges in using in-situ runtime verification for embedded systems are 1) efficiently defining and automatically constructing a requirements model for embedded system software and 2) minimizing the runtime overhead of observing and verifying that the runtime execution adheres to the requirements model. In this talk, we present a methodology to construct a hierarchical runtime monitoring graph from system requirements specified using multiple UML sequence diagrams, which are already commonly used in software development. We further present the design of on-chip hardware that non-intrusively monitors the system at runtime to ensure the execution matches the requirements model. We evaluate the proposed methodology using a case study of a fail-safe autonomous vehicle subsystem and analyze the relationship between event coverage, detection rate, and hardware requirements.
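
Conceptually, the requirements model acts as an automaton over observed events, and the monitor flags any trace that deviates. Here is a minimal software sketch (with toy events and transitions; the actual work derives the model from UML sequence diagrams and checks it in non-intrusive on-chip hardware):

    REQUIREMENT = {                       # state -> {observed event: next state}
        "idle": {"brake_cmd": "braking"},
        "braking": {"brake_ack": "idle", "timeout": "failsafe"},
    }

    def monitor(trace, state="idle"):
        for event in trace:
            nxt = REQUIREMENT.get(state, {}).get(event)
            if nxt is None:
                return f"violation: {event!r} unexpected in state {state!r}"
            state = nxt
        return f"ok, final state {state!r}"

    print(monitor(["brake_cmd", "brake_ack"]))  # conforms to the model
    print(monitor(["brake_cmd", "brake_cmd"]))  # deviation detected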

Biography:

Minjun Seo is a Ph.D. candidate in the Department of Electrical and Computer Engineering at the University of Arizona.  His current research focuses on efficient specification and implementation of in-situ requirements monitoring of embedded systems. His research interests also include design tools and optimization methods supporting efficient verification hardware, FPGAs, and HW/SW co-design. Mr. Seo received a B.S. in Computer Science and Engineering in 2006 and an M.S. in Computer Science in 2008 from Kyungnam University.

“I/O Optimization for Mobile Flash Storage”

Speaker: Prof. Chun Jason Xue, City University of Hong Kong

Date and Time: Tuesday, December 5, 2017 at 10:00AM-11:00AM

Location: DBH 3011

Abstract:

NAND flash memory is the primary choice of data storage for mobile devices due to its high performance, shock resistance and low power consumption. However, compared to high-end solid-state drives, mobile flash storage does not have the luxury of sophisticated hardware and firmware features because of its resource constraints. Instead, mobile flash storage is often equipped with scarce built-in RAM, slow embedded processors and low-cost flash memories, all of which make it a challenge to apply traditional techniques for I/O performance improvement. This talk presents two pieces of recent work on optimizing I/O for mobile flash storage. First, the talk will introduce a novel I/O scheduling approach to improve the performance of the demand-based page-level mapping cache. This technique generates mapping-cache-friendly I/O workloads by strengthening I/O locality at the host I/O scheduler; both temporal and spatial locality are taken into consideration. Second, the talk will present a lightweight data compression technique at the flash controller to reduce write pressure on mobile flash storage. It first characterizes data compressibility on real smartphones, and the analysis shows that write traffic bound to mobile storage volumes is highly compressible. This technique is the first to investigate firmware-based data compression for mobile flash storage without adding extra data compression hardware. Experimental results demonstrate that the proposed techniques outperform state-of-the-art schemes in terms of I/O latency and flash memory lifespan.
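
To illustrate the first idea (a simplified sketch, not the scheduler presented in the talk): if the FTL caches page-mapping entries in regions of, say, 1024 logical pages, dispatching pending requests grouped by region lets each mapping-table page be fetched once instead of repeatedly evicted and reloaded.

    REGION = 1024  # logical pages per cached mapping-table page (assumed)

    def schedule(pending_lpns):
        # Group requests by mapping region, then by page within the region,
        # strengthening locality as seen by the demand-based mapping cache.
        return sorted(pending_lpns, key=lambda lpn: (lpn // REGION, lpn))

    pending = [5, 90000, 7, 90010, 6, 123456]
    print(schedule(pending))  # [5, 6, 7, 90000, 90010, 123456]

A real scheduler would additionally bound such reordering to preserve fairness and any ordering constraints.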

Biography:

Dr. Chun Jason Xue is an Associate Professor in the Computer Science Department at City University of Hong Kong. His research interests include non-volatile memories and embedded and real-time systems. He is currently an Associate Editor for ACM Transactions on Embedded Computing Systems, ACM Transactions on Cyber-Physical Systems, and ACM Transactions on Storage. He was TPC co-chair for LCTES 2015 and ISVLSI 2016, and has served as a TPC member for premier conferences such as DAC, DATE, RTSS, RTAS, CODES, EMSOFT and ISLPED.

“Network Resource Management in Cyber-Physical Systems”

Speaker: Dr. Xiaobo Sharon Hu, University of Notre Dame, USA

Date and Time: Friday, November 17, 2017 at 2:00PM-3:00PM

Location: ICS 432

Abstract:

A cyber-physical system (CPS) is a system built from the close integration of computational fabrics and physical components. Examples of such systems include avionic systems, industrial control and civil infrastructure monitoring. In a CPS, sensors and actuators are used to monitor and control the physical components, while the computational fabrics determine the control values for the actuators based on the sensed data. All CPSs require timely delivery of sensed data from sensors to the computing fabrics and of control signals from the computing fabrics to actuators. Managing the limited resources (e.g., computation power and communication bandwidth) to meet the timing requirements of a CPS is a challenging task. Even more challenging, CPSs should degrade gracefully in the presence of external disturbances such as failures in critical civil infrastructure and malicious attacks.

In this talk, I first give a general introduction to wireless networked control systems (WNCSs) and the challenges they present to network resource management. In particular, I will discuss the complications due to external disturbances and the need for dynamic data-link layer scheduling. I then highlight our recent work aimed at tackling this challenge. Our work balances the scheduling effort between a gateway (or access points) and the rest of the nodes in a network, paving the way towards decentralized network resource management and thus scalability. An experimental implementation on a wireless testbed further validates the applicability of the proposed research. I will end the talk by outlining our ongoing effort in this exciting and growing area of research.

Biography:

Xiaobo Sharon Hu is a professor in the Department of Computer Science and Engineering at the University of Notre Dame, Notre Dame, Indiana, USA. She also holds a joint appointment in the Department of Electrical Engineering at the same university. She received her B.S. degree from Tianjin University, China, her M.S. degree from Polytechnic University of New York, and her Ph.D. degree from Purdue University. She worked for General Motors Research Labs for almost four years before starting her academic career. Between 1993 and 1996, she was an assistant professor in the Department of Electrical and Computer Engineering at Western Michigan University, Kalamazoo, Michigan, USA.

Her research interests include the analysis and design of low-power, real-time, and embedded systems, computing with emerging technologies, and computational medicine. She has published more than 200 refereed papers in these areas and received numerous research grants from both U.S. government agencies and private industry. She received the CAREER award from the U.S. National Science Foundation in 1997. She received the Best Paper Award from the ACM/IEEE Design Automation Conference in 2001 and from the IEEE Symposium on Nanoscale Architectures in 2009. Another of her papers was named one of “The Most Influential Papers of 10 Years Design, Automation, and Test in Europe Conference (DATE)”.

Sharon is currently an Associate Editor for ACM Transactions on Embedded Computing Systems and Co-Chair of the Technical Program Committee of the 2014 Design Automation Conference (DAC). She has also served as Associate Editor for IEEE Transactions on VLSI and ACM Transactions on Design Automation of Electronic Systems, and as guest editor for several journals/magazines such as IEEE Computer and IEEE Transactions on Industrial Informatics. She was Technical Program Co-Chair of the 9th International Symposium on Hardware/Software Codesign (CODES 2001) and General Co-Chair of the same conference in 2002. She has also served on the program committees of a number of conferences, such as the Design Automation Conference (DAC), the International Conference on Computer-Aided Design (ICCAD), Design, Automation and Test in Europe (DATE), the IEEE Real-Time Systems Symposium (RTSS), and the IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS).

At the University of Notre Dame, Sharon was the Director of Graduate Studies in the Department of Computer Science and Engineering for 10 years, and was Senior Assistant Provost for ND International between 2012 and 2013. Currently, she is the Associate Dean for Professional Development at the Graduate School.

“ThirdEye: Visual Assist for Grocery Shopping”

Speaker: Prof. Vijaykrishnan Narayanan, Pennsylvania State University, USA

Date and Time: Wednesday, November 15, 2017 at 11:00AM-12:00PM

Location: 2430 Engineering Hall

Abstract:

Shopping is widely considered a relaxing leisure activity. However, grocery shopping can be a frustrating experience for those with visual impairment. While getting to a grocery store is not much of a challenge for them, locating and picking items from the grocery shelf is as challenging as finding a needle in a haystack. Imagine picking up five items for your dinner recipe from a typical grocery store in the US that carries around 35,000 unique items and can have more than 30 aisles spanning 45,000 square meters. This talk will showcase synergistic advances in algorithms, architectures and interface design for assisting those with visual impairment in shopping. We will specifically focus on new non-Boolean hardware approaches that significantly improve the energy efficiency of the overall system.

Biography:

Vijay Narayanan is a Distinguished Professor of Computer Science and Engineering and Electrical Engineering at The Pennsylvania State University. He is the director of the NSF Expeditions-in-Computing Program on Visual Cortex on Silicon and a thrust leader for the DARPA-MARCO LEAST Center. He has published more than 400 papers and won several awards in recognition of his research in power-aware systems, embedded systems and computer architecture. He is a Fellow of the IEEE and ACM.

“Detecting Hardware Trojans Hidden in Unspecified Design Functionality”

Speaker: Dr. Nicole Fern, University of California, Santa Barbara

Date and Time: Thursday, November 16th, 2017 at 2:00PM-3:00PM

Location: Donald Bren Hall 3011

Abstract:

Traditional verification methods and metrics attempt to answer the question: does my design correctly perform the intended functionality?  This talk will look at hardware verification from a security perspective, which demands the verification effort answer an additional question: does my design perform malicious functionality in addition to the intended functionality?  The talk will motivate through examples why Hardware Trojans modifying only unspecified design functionality are both powerful and stealthy.  RTL don’t cares and idle cycles in on-chip bus protocols are two examples of unspecified functionality vulnerable to malicious modification that this talk will explore in depth.  This talk will also detail how to formulate the Trojan detection problem as a satisfiability problem in order to leverage existing formal verification tools to highlight Trojans hidden in unspecified functionality.
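
As a toy illustration of that satisfiability formulation (a generic sketch using the z3 Python bindings, not the speaker’s exact encoding): an implementation can match its specification everywhere the specification is defined, and thus pass functional verification, while still carrying attacker-chosen behaviour in the don’t-care space.

    from z3 import BitVec, BitVecVal, If, Solver, ULT

    x = BitVec("x", 4)
    SPEC_DOMAIN = ULT(x, 15)   # behaviour is specified only for inputs x < 15

    def spec(v):
        return v + 1           # the intended function on the specified domain

    def impl(v):
        # Matches spec everywhere except the don't-care input x == 15,
        # where a (hypothetical) Trojan emits an attacker-chosen constant.
        return If(v == 15, BitVecVal(0xA, 4), v + 1)

    s = Solver()
    s.add(SPEC_DOMAIN, impl(x) != spec(x))
    print("mismatch in specified domain:", s.check())  # unsat: verification passes

    s = Solver()
    s.add(x == 15, impl(x) != spec(x))
    print("hidden behaviour reachable:", s.check())    # sat: room for a Trojan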

Biography: 

Nicole Fern received her undergraduate degree in Electrical Engineering from The Cooper Union for the Advancement of Science and Art and her PhD from the ECE Department at UC Santa Barbara under the advisement of Professor Tim Cheng.  She is now a postdoc at UC Santa Barbara.  Her thesis work focused on developing techniques to verify the absence of Hardware Trojans in unspecified design functionality.  Her current research interests include security issues in emerging memory technologies and at the hardware/software boundary.

“Application Mapping Methodologies for NoC-Based MPSOCs”

Speaker: Jürgen Teich, Friedrich-Alexander Universität Erlangen-Nürnberg (FAU)

Date and Time: Tuesday, November 14, 2017 at 3:00PM-4:00PM

Location: DBH 4011

Abstract: 

In this talk, we give an overview of novel techniques for systematically mapping applications to NoC-based multi-core architectures (MPSoCs). Complex applications requiring heterogeneous processing resources are often described by task graphs with data dependencies, in which the nodes represent actors or tasks that are typically activated periodically based on the availability of data. One prominent application domain fitting this model is stream processing, where it is often important to guarantee either bandwidth or execution time requirements. More recently, security, energy and reliability aspects also impose constraints on the mapping of tasks to cores and of their communication to routes in the underlying NoC.

Concerning mapping methodologies, we first present a class of algorithms that perform “Self-Embedding”: the idea is that a source node issues a request to find appropriate resources on which to embed its successor tasks, and so on. The next class of techniques, called “Hybrid Application Mapping (HAM)”, carefully analyses and characterizes symmetric mappings, given by constellations of cores and routes, in a static (compile-time) phase called “Design Space Exploration (DSE)”. At run time, the operating system then only needs to search within such pre-analysed constellations to find a concrete mapping that satisfies the given non-functional constraints by construction. We present ideas on how timing constraints may be statically analysed in the case of compositional MPSoC architectures, such that deadlines or throughput requirements are automatically met for streaming applications. Finally, we conclude with a discussion of resource constellations that may satisfy certain security requirements on an MPSoC.
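
To give a flavor of the compile-time DSE step (a toy sketch; the task graph, core speeds and additive latency model are illustrative assumptions, not the HAM methodology itself), one can enumerate mappings of a small task chain onto cores and keep only the constellations that meet a deadline:

    from itertools import product

    TASKS = {"src": 4, "filter": 8, "sink": 2}            # abstract work units
    CORES = {"big": 2.0, "little0": 1.0, "little1": 1.0}  # relative core speeds
    DEADLINE = 9.0

    def latency(mapping):
        # Serialized execution along the chain (ignores communication delays).
        return sum(TASKS[t] / CORES[c] for t, c in mapping.items())

    feasible = []
    for cores in product(CORES, repeat=len(TASKS)):
        mapping = dict(zip(TASKS, cores))
        if latency(mapping) <= DEADLINE:
            feasible.append(mapping)

    # At run time the OS picks among the pre-analysed feasible constellations.
    print(len(feasible), "constellations meet the deadline; e.g.", feasible[0])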

Biography:

Jürgen Teich is with Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Germany, where he has directed the Chair for Hardware/Software Co-Design since 2003. He received the M.S. degree (Dipl.-Ing.; with honors) from the University of Kaiserslautern, Germany, in 1989 and the Ph.D. degree (Dr.-Ing.; summa cum laude) from the University of Saarland, Saarbruecken, Germany, in 1993. In 1994, he joined the DSP design group of Prof. E. A. Lee in the Department of Electrical Engineering and Computer Sciences (EECS) at the University of California at Berkeley as a postdoc. From 1995 to 1998, he held a position at the Institute of Computer Engineering and Communications Networks Laboratory (TIK), ETH Zurich, Switzerland, completing his Habilitation on the topic of “Synthesis and Optimization of Digital Hardware/Software Systems” in 1996. From 1998 to 2002, he was Full Professor in the Electrical Engineering and Information Technology Department of the University of Paderborn, Germany.

His current research focuses on electronic design automation of embedded systems, with emphasis on hardware/software co-design, reconfigurable computing and multi-core systems. Prof. Teich has organized various ACM/IEEE conferences/symposia as Program Chair, including CODES+ISSS 2007, FPL 2008, ASAP 2010, and DATE 2016. He serves regularly as a TPC member of many program committees, including DAC, ASP-DAC, ICCAD, FPL, ASAP, FPT, FPGA, RECONFIG, ESTIMEDIA, VLSI Design, GECCO, EMO, and RTSS. He also serves on the editorial boards of journals including ACM TODAES, IEEE Design & Test and JES, and has edited two textbooks on hardware/software co-design and, recently, a Springer handbook on the topic.

Prof. Teich is involved in many interdisciplinary basic-research projects as well as industrial projects. From 2003 to 2009, he was an elected board member (Fachkollegiat) of the Deutsche Forschungsgemeinschaft (DFG) for the area of computer architecture and embedded systems. He was the initiator and coordinator of the DFG priority programme 1148 “Reconfigurable Computing”. Since 2010, he has also been the principal coordinator of the Transregional Research Center 89 “Invasive Computing”, funded by the German Research Foundation (DFG). In 2011, he was elected a member of the Academia Europaea.

Part I: “Development of Low-End Embedded Processors for Some SoC Applications” Part II: “The Application-Specific Design for Signal Processing Applications”

Speaker: Prof. Fitzgerald Sungkyung Park, Pusan National University, Busan, South Korea
Prof. Chester Park, Konkuk University, Seoul, Korea

Date and Time: Monday, August 7, 2017, 2:00 PM – 3:00 PM

Location: Donald Bren Hall 3011

Abstract:

Part I: Low-end processor cores can be utilized in various SoC applications including IoT, wireless communication, and machine learning.  In this short talk, we will introduce how we designed low-end integer cores for applications such as deeply embedded IoT and a WLAN MAC SoC, and also introduce the basic design of embedded cores for neural networks.

Part II: The application-specific design for signal processing applications tends to necessitate multi-disciplinary knowledge on system, algorithm, architecture and circuit levels.  In this talk, we will introduce our application-specific design approaches for various signal processing applications.  In addition, we discuss several design challenges involved in system-on-a-chip (SoC) design for neural networks, regarding how to customize the on-chip bus architecture.

Biography:

Prof. Fitzgerald Sungkyung Park received his Ph.D. degree in electronics engineering from Seoul National University, Korea, in 2002. He worked for Samsung Electronics from 2002 to 2004, was with the Electronics and Telecommunications Research Institute (ETRI) from 2004 to 2006, and worked for Ericsson, Inc., USA, from 2006 to 2009, where he developed mixed-signal circuits for radio transceivers. In 2009, he joined the faculty of Pusan National University, where he has worked on low-end processors and SoCs for IoT and other applications.

Prof. Chester Park received his Ph.D. degree in electrical engineering from the Korea Advanced Institute of Science and Technology (KAIST), Korea, in 2006. After about two years with Samsung Electronics Inc., Giheung, Korea, he joined Ericsson Research, USA, where he developed various signal processing algorithms for wireless communications. Since 2013, he has been with Konkuk University, Seoul, Korea, working on hardware accelerator design for signal processing algorithms.