
Keynote Speeches

Schedulability-driven Optimization of Real-time Systems

Leandro Indrusiak

Reader in the Department of Computer Science, University of York

Schedulability tests can determine whether a given embedded system is able to meet the real-time requirements of its application-specific workload, even in a worst-case scenario. They do so by carefully modeling the workload, the available resources in the embedded system (e.g. processors, buses, networks, memory controllers), and the respective resource sharing and scheduling policies. This talk will review the basics of schedulability tests through examples and will build on that foundation to describe state-of-the-art research using sophisticated schedulability tests that can guide optimization heuristics towards solutions that fully meet a system's real-time requirements. To illustrate the potential and industrial relevance of that approach, we will cover case studies based on wireless networks, on-chip multiprocessors, and large-scale industrial cyber-physical systems. The talk will close with an overview of the latest research attempting to automate the creation of schedulability tests for different types of real-time systems.
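
To give a flavor of the kind of analysis the talk builds on, below is a minimal sketch of classic fixed-priority response-time analysis for independent periodic tasks (a standard textbook test, not necessarily one of the tests covered in the talk); the task set is hypothetical and deadlines are assumed equal to periods.

```python
import math

def response_times(tasks):
    """Classic fixed-priority response-time analysis.

    tasks: list of (C, T) tuples (worst-case execution time, period),
    ordered from highest to lowest priority; deadlines are assumed
    equal to periods.  Returns the worst-case response time of each
    task, or None if the fixed-point iteration exceeds the deadline.
    """
    results = []
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i                                  # initial guess: own WCET
        while True:
            # interference from all higher-priority tasks
            interference = sum(math.ceil(r / t_j) * c_j
                               for c_j, t_j in tasks[:i])
            r_next = c_i + interference
            if r_next > t_i:                     # deadline (= period) missed
                results.append(None)
                break
            if r_next == r:                      # fixed point reached
                results.append(r)
                break
            r = r_next
    return results

# Hypothetical task set: (WCET, period), highest priority first
print(response_times([(1, 4), (2, 6), (3, 12)]))   # -> [1, 3, 10]
```

A task set is deemed schedulable when every computed response time is no larger than the corresponding deadline; the optimization heuristics mentioned in the abstract use such tests to evaluate candidate solutions while exploring the design space.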

Leandro Soares Indrusiak has been a faculty member of the University of York's Computer Science department since 2008. He is a member of the Real-Time Systems (RTS) research group, working on real-time systems and networks, distributed embedded systems, on-chip multiprocessing, energy-efficient computing, cyber-physical systems, cloud and high-performance computing, and several types of resource allocation problems (in computing, manufacturing, and transportation). He has published more than 150 peer-reviewed papers in the main international conferences and journals covering those topics, nine of which received best paper awards. He is or has been a principal investigator in projects funded by the EU, EPSRC, DFG, the British Council, FAPERGS, and industry. He graduated in Electrical Engineering from the Federal University of Santa Maria (UFSM) in 1995, obtained an MSc in Computer Science from the Federal University of Rio Grande do Sul (UFRGS) in 1998, and was awarded a binational doctoral degree by UFRGS and Technische Universität Darmstadt in 2003. Prior to his appointment at York, he held a tenured assistant professorship in the Informatics department of the Pontifical Catholic University of Rio Grande do Sul (PUCRS) (1998-2000) and worked as a researcher at the Institute of Microelectronic Systems at TU Darmstadt (2001-2008).


-

From Partitioning to Management: Tackling Memory Contention with Fine-grained Profiling and Control

Renato Mancuso

Assistant Professor in the Department of Computer Science at Boston University


Over the last decade, embedded computing platforms have exploded in complexity. This push has been driven by the need for context-awareness in next-generation cyber-physical systems, that is, the ability to exploit knowledge of the environment and to make complex decisions based on a multitude of sensory streams. Unfortunately, as platforms grow in complexity to improve context-awareness, the interplay between concurrent software components and the underlying hardware becomes hard to predict and to reason about. The latter can be thought of as the capacity to achieve self-awareness. There exists, therefore, a fundamental tension between context- and self-awareness. The lack of temporal isolation in modern platforms has shaken the foundations of real-time theory, embedded systems design, verification, and validation. Seminal results have been achieved in mitigating temporal interference and enforcing strong performance isolation via hardware resource partitioning, but the problem largely remains an open research question. At its core, the issue of temporal interference shares many similarities with a class of problems in security threat identification and mitigation, namely time-based side-channel attacks. Unsurprisingly, both challenges trace back to a lack of self-awareness in modern platforms. This begs the question: can we set aside resource partitioning as "poor man's management" and elevate self-awareness in modern embedded systems instead? In this talk, I will walk you through some of the milestones in hardware resource partitioning that have led to important changes in the way we design modern operating systems and hypervisors. I will then review the latest advancements propelled by my research lab in techniques to build knowledge of the application workload, which is a crucial stepping stone for self-awareness. I will also review the fundamental mechanisms that allow exerting fine-grained monitoring and control over hardware resources and data flows. Finally, I will discuss an overarching vision for how self-awareness can be achieved in current and future embedded platforms.
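
As a purely illustrative example of the temporal interference discussed above, the sketch below times a memory-bound kernel first in isolation and then while hypothetical co-runner processes stream through large buffers; the slowdown observed in the second run is the kind of effect that partitioning and fine-grained management aim to bound. Real profiling in this line of work relies on hardware performance counters and OS/hypervisor-level mechanisms rather than this user-space approximation.

```python
import multiprocessing as mp
import time
import numpy as np

def co_runner(stop):
    """Hypothetical interference source: repeatedly sweep a large buffer."""
    buf = np.zeros(64 * 1024 * 1024 // 8)        # ~64 MiB of doubles
    while not stop.is_set():
        buf += 1.0                               # keeps memory traffic high

def measure_kernel(repeats=20):
    """Time a memory-bound kernel (large array copy) several times."""
    a = np.random.rand(32 * 1024 * 1024 // 8)    # ~32 MiB source array
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        _ = a.copy()                             # memory-bound operation
        times.append(time.perf_counter() - t0)
    return min(times), max(times)

if __name__ == "__main__":
    print("alone:           min/max =", measure_kernel())

    stop = mp.Event()
    procs = [mp.Process(target=co_runner, args=(stop,)) for _ in range(3)]
    for p in procs:
        p.start()
    try:
        print("with co-runners: min/max =", measure_kernel())
    finally:
        stop.set()
        for p in procs:
            p.join()
```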

Renato Mancuso is currently an assistant professor in the Computer Science Department at Boston University. Before starting his tenure-track position in fall 2017, Renato received his Ph.D. from the Department of Computer Science at the University of Illinois at Urbana-Champaign. He is interested in high-performance cyber-physical systems, with a specific focus on techniques to enforce strong performance isolation and temporal predictability in multi-core systems. He has published more than 30 papers in major conferences and journals. His papers received two best paper awards and a best presentation award at the Real-Time and Embedded Technology and Applications Symposium (RTAS). Some of the design principles for real-time multi-core computing proposed in his research have been officially incorporated into recent certification guidelines for next-generation avionics and endorsed by government agencies, industries, and research institutions worldwide. Renato is also the information director of the ACM SIGBED special interest group on embedded systems. His research is supported by the NSF, Bosch GmbH, and Red Hat. He is a member of the IEEE.


-

Early Soft Error Assessment: The Challenges, Benefits, and Drawbacks

Luciano Ost

Lecturer in Digital Electronics at the School of Mechanical, Electrical, and Manufacturing Engineering, Loughborough University


Multicore electronic computing systems are incorporating more functionalities and new technologies into their software stacks (i.e., kernels, drivers, and machine learning-based applications). The software stacks running on such architectures differ in terms of security, reliability, performance, and power requirements. While supercomputer software development considers performance the primary criterion, software stacks embedded in safety-critical systems, such as cars, must comply with strict safety and reliability requirements, which are defined by specific standards such as ISO 26262, the functional safety standard for road vehicles. To ensure the fail-safe functionality of such systems, reliability engineers should be able not only to identify soft errors but also to explore efficient mitigation solutions that reduce their occurrence early in the design cycle. In this talk, I will discuss the key challenges, benefits, and drawbacks of assessing the soft error reliability of the underlying systems.
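
As a toy illustration of what early soft error assessment involves (this is not the speaker's framework), the sketch below runs a hypothetical fault injection campaign: it flips a single random bit in the weights of a small computation and classifies each run as masked or as silent data corruption against a golden result. Production-grade assessment operates on full software stacks using microarchitectural, RTL, or fast virtual-platform fault injection.

```python
import random
import struct

def flip_bit(value, bit):
    """Flip one bit of a 32-bit float (a crude single-event-upset model)."""
    raw = struct.unpack("<I", struct.pack("<f", value))[0]
    return struct.unpack("<f", struct.pack("<I", raw ^ (1 << bit)))[0]

def dot(weights, inputs):
    return sum(w * x for w, x in zip(weights, inputs))

def fault_injection_campaign(weights, inputs, runs=1000, tol=1e-3):
    """Hypothetical campaign: one random bit-flip per run, classified as
    masked or as silent data corruption (SDC) against the golden output."""
    golden = dot(weights, inputs)
    masked = sdc = 0
    for _ in range(runs):
        faulty = list(weights)
        idx = random.randrange(len(faulty))
        bit = random.randrange(32)
        faulty[idx] = flip_bit(faulty[idx], bit)
        result = dot(faulty, inputs)
        if abs(result - golden) <= tol * max(abs(golden), 1.0):
            masked += 1
        else:
            sdc += 1
    return masked, sdc

weights = [0.5, -1.2, 0.8, 2.0]
inputs = [1.0, 0.3, -0.7, 0.1]
print("masked, SDC:", fault_injection_campaign(weights, inputs))
```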

Luciano Ost is currently a faculty member of Loughborough University's Wolfson School, where, among other activities, he serves as the Programme Director of the Electronic and Computer Systems Engineering programme. He received his Ph.D. degree in Computer Science from PUCRS, Brazil, in 2010. During his Ph.D., Dr Ost worked as an invited researcher at the Microelectronic Systems Institute of the Technische Universität Darmstadt (from 2007 to 2008) and at the University of York (October 2009). After completing his doctorate, he worked for two years as a research assistant and then for two years as an assistant professor at the University of Montpellier II in France. He has authored more than 90 papers, and his research is devoted to advancing hardware and software architectures to improve the performance, security, and reliability of life-critical and multiprocessing embedded systems.


-

Building and Executing ML Models at Ultra-Fast Speeds for the Particle Collider

Claudionor José Nunes Coelho

VP/Fellow for AI and Head of AI Labs at Palo Alto Networks


While the quest for more accurate solutions is pushing deep learning research towards larger and more complex algorithms, edge devices with hard real-time constraints demand very efficient inference engines, e.g. in terms of model size, latency, and energy consumption. In this talk, we introduce a novel method for designing heterogeneously quantized versions of deep neural network models for minimum-energy, high-accuracy, nanosecond inference and fully automated deployment on chip. Our technique, AutoQKeras, combines AutoML and QKeras, jointly optimizing layer hyperparameters and quantization. Users can select among several optimization strategies, such as global optimization of network hyperparameters and quantizers, or splitting the optimization into smaller search problems to cope with search complexity. We have applied this design technique to the event selection procedure in proton-proton collisions at the CERN Large Hadron Collider, where resources are strictly limited and a latency of O(1) μs is required. When implemented on FPGA hardware, the resulting models achieve nanosecond inference and a factor-of-50 reduction in resource consumption.
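
For readers unfamiliar with QKeras, the fragment below sketches what a heterogeneously quantized model looks like when the bit widths are picked by hand; the layer sizes and quantizer settings here are illustrative assumptions, and AutoQKeras automates exactly this choice together with the layer hyperparameters.

```python
# A hand-quantized toy model in QKeras (bit widths are illustrative only;
# AutoQKeras would search over these together with the layer sizes).
from tensorflow.keras.layers import Input, Activation
from tensorflow.keras.models import Model
from qkeras import QDense, QActivation, quantized_bits

inputs = Input(shape=(16,))
x = QDense(32,
           kernel_quantizer=quantized_bits(4, 0, 1),   # 4-bit weights
           bias_quantizer=quantized_bits(4, 0, 1),
           name="fc1")(inputs)
x = QActivation("quantized_relu(4,0)", name="relu1")(x)
x = QDense(5,
           kernel_quantizer=quantized_bits(6, 0, 1),   # 6-bit weights here
           bias_quantizer=quantized_bits(6, 0, 1),
           name="fc_out")(x)
outputs = Activation("softmax", name="softmax")(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```

In the trigger use case described above, a model of this kind would subsequently be translated into FPGA firmware to meet the microsecond latency budget.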

Claudionor N. Coelho is the VP/Fellow for AI and Head of AI Labs at Palo Alto Networks. Previously, he worked on Machine Learning/Deep Learning at Google. He is the creator of QKeras, a deep learning package for quantization on top of Keras with support for automatic quantization. He was the VP of Software Engineering, Machine Learning, and Deep Learning at NVXL Technology. He did seminal work on AI at Synopsys Inc., and he was the General Manager for Brazil at Cadence Design Systems, following its acquisition of Jasper Design Automation, where he was the Worldwide SVP of R&D. He has more than 80 papers, patents, and academic and industry awards. He is currently an Invited Professor for Deep Learning at Santa Clara University, and previously he was an Associate Professor of Computer Science at UFMG, Brazil. He has a Ph.D. in EE/CS from Stanford University, an MBA from IBMEC Business School, and an MSCS and BSEE (summa cum laude) from UFMG, Brazil.


-