Andy Pimentel (University of Amsterdam)
Modern embedded systems are becoming increasingly multifunctional and, as a consequence, they increasingly have to deal with dynamic application workloads. This dynamism manifests itself in the presence of multiple applications that can execute simultaneously and contend for resources in a single embedded system, as well as in the dynamic behavior within the applications themselves. Such dynamic behavior in application workloads must be taken into account during the design of multiprocessor system-on-chip (MPSoC)-based embedded systems. In this talk, I will present the concept of application workload scenarios to capture application dynamism and explain how these scenarios can be used to search for optimal mappings of a multi-application workload onto an MPSoC. To this end, the talk will address techniques for both design-time mapping exploration and run-time mapping of applications.
Andy Pimentel is an associate professor at the Informatics Institute of the University of Amsterdam, where he heads the Computer Systems Architecture group and acts as vice-chair of the System and Network Engineering Lab. He holds MSc and PhD degrees in computer science, both from the University of Amsterdam. He is a co-founder of the International Conference on Embedded Computer Systems: Architectures, Modeling, and Simulation (SAMOS) and a member of the European Network of Excellence on High-Performance Embedded Architecture and Compilation (HiPEAC). His research centres around system-level modeling, simulation, and exploration of multi- and many-core computer systems with the purpose of effectively designing and programming these systems. More specifically, his research interests include high-level modeling and simulation methods, design space exploration, performance and power analysis, EDA, design and optimization of multi-/many-core systems, computer architecture, embedded systems, and parallel and reconfigurable computing.
Andy has (co-)authored more than 100 scientific publications. He is an associate editor of Elsevier's Simulation Modelling Practice and Theory and of Springer's Journal of Signal Processing Systems, and he has been a guest associate editor for a large number of special issues. He served as General Chair of HiPEAC'15 and as Local Organization Co-chair of ESWeek'15, and he serves as Program (Vice-)Chair of CODES+ISSS in 2016 and 2017. Furthermore, he serves (or has served) on the TPCs of essentially all leading (embedded) computer systems design conferences, such as DAC, DATE, CODES+ISSS, ICCD, ICCAD, FPL, SAMOS, and ESTIMedia.
Dionisio de Niz (CMU)
Dionisio de Niz completed his Ph.D. in ECE and his M.Sc. at the Information Networking Institute, both at Carnegie Mellon University.
Virtualization and isolation are concepts that were introduced very early in operating systems. From the IBM OS/360 virtual machines to the MULTICS dream of providing computational services to the whole city of Boston, the idea of giving each user the perception of having the machine to themselves has guided OS innovation for a long time. Indeed, the need to time-share the CPU gave birth to scheduling. However, the original concept of timesharing for general-purpose computing proved insufficient for safety-critical systems that must be guaranteed to finish by a deadline (real-time systems), motivating the NASA Apollo program to develop fixed-priority scheduling and leading to the basic rate-monotonic scheduling theory developed by an MIT professor and a NASA scientist. Later, a number of practical issues were ironed out by researchers at Carnegie Mellon University, leading to its adoption in most real-time operating systems today. As real-time systems grew in size and complexity, they came to be built by integrating components (frequently from different suppliers) that needed to be developed independently of each other and hence had to be isolated from each other. As a result, a number of temporal protection mechanisms for real-time systems appeared in the form of “servers.” Servers evolved into operating system abstractions called “resource reservations” that provide temporal budgets for the use of different resources, including CPU, network bandwidth, disk bandwidth, memory, etc. Today, temporal isolation is one of the key components of real-time certification standards, including DO-178C for US Federal Aviation Administration certification and ISO 26262 for automotive. These standards rely on temporal isolation to allow limiting re-certification to a modified component alone if it can be proven that this component cannot interfere with other components.
Guided by practical requirements, both of these standards recognize that the failure of different components can have different consequences: some have safety-critical consequences while others affect only comfort. As a result, they allow different correctness-validation techniques, more stringent for safety-critical components and less so for less critical ones. This opened up an opportunity for a different kind of scheduling where each task is assigned a criticality level and tasks are allocated different CPU budgets for different criticality levels. A task τ_i is then guaranteed to receive the budget of its criticality level provided that no task with a higher criticality exceeds the budget of τ_i's criticality level. This is achieved by allowing high-criticality tasks to steal cycles from lower-criticality ones if needed. This changes temporal isolation into an asymmetric temporal protection, where higher-criticality tasks are protected from lower-criticality ones but not the other way around. The Zero-Slack Rate-Monotonic (ZSRM) scheduler is one such mixed-criticality scheduler that offers asymmetric protection. To support tasks that synchronize among themselves, the priority-and-criticality inheritance protocol was developed for ZSRM. Similarly, ZSRM has been extended to take into account changes in tasksets and criticalities in modal systems, in order to support dynamic systems. Furthermore, to support utility saturation, it was extended with concave utility curves that enable graceful degradation, resulting in the Zero-Slack Quality-of-Service Resource Allocation Method (ZS-QRAM). In parallel, multicore processors have brought new challenges to temporal protection, given that shared hardware resources create new interference across cores. I will present our work on memory reservation and assignment. Finally, I will provide a few perspectives on the future challenges of temporal protection for real-time systems.
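The asymmetric-protection guarantee described above can be sketched in a few lines of Python. This is an illustrative model only, not the actual ZSRM algorithm: the task names, budget values, and the simple overrun check are invented for the example, and real ZSRM reasons about per-task zero-slack instants rather than comparing observed demands against budgets after the fact.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    criticality: int     # higher number = more critical
    nominal_budget: int  # CPU budget assumed when no overload occurs

def degraded_tasks(tasks, demand):
    """Return the names of tasks that lose their guarantee.

    Asymmetric protection: a task tau_i keeps its budget guarantee
    unless some task with criticality strictly higher than
    tau_i.criticality overruns its nominal budget, in which case the
    overrunning task may steal cycles from tau_i.
    """
    # Criticality levels at which some task exceeded its nominal budget.
    overrunning = {t.criticality for t in tasks
                   if demand[t.name] > t.nominal_budget}
    return [t.name for t in tasks
            if any(c > t.criticality for c in overrunning)]

tasks = [
    Task("flight_ctrl", criticality=2, nominal_budget=10),
    Task("telemetry", criticality=1, nominal_budget=5),
    Task("infotainment", criticality=0, nominal_budget=8),
]

# flight_ctrl overruns (12 > 10), so both lower-criticality tasks may
# lose cycles, while flight_ctrl itself stays protected.
print(degraded_tasks(tasks, {"flight_ctrl": 12,
                             "telemetry": 4,
                             "infotainment": 6}))
# → ['telemetry', 'infotainment']
```

Note the asymmetry: when instead only telemetry (criticality 1) overruns, flight_ctrl (criticality 2) is unaffected and only infotainment (criticality 0) can be degraded.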
Dionisio was a professor at the ITESO university in Mexico for two years after completing his Ph.D. During this time he ran the Embedded and Real-Time Systems Collaborative Laboratory, where research was conducted in collaboration with other universities (including CMU) and industry partners such as Intel. He also led an interest group that developed the Automotive Embedded Software Roadmap for the state of Jalisco. At the SEI he has been working on the modeling of embedded real-time systems with the Architecture Analysis and Design Language (AADL). He has been involved in the development of the reference implementation tool OSATE, in the development of new analysis algorithms, and in consulting with multiple clients on the application of AADL to solve problems in the development of embedded real-time systems. On the research front, he has been focusing on new scheduling algorithms for real-time systems with new requirements (e.g., mixed-criticality) and new platforms (e.g., multicore processors). In addition, he has been looking at new techniques to verify concurrency properties in runtime architectural models.
Mauro Carvalho Chehab (Samsung Brazil Research (Open Source Group))
The larger the systems and the IT infrastructure, the greater the importance of Reliability, Availability and Serviceability (RAS) monitoring. This talk will describe the recent changes in RAS monitoring that are available in Linux kernel 3.10. It will also describe the rasdaemon monitoring tool, which uses the special perf events generated by the kernel to monitor fatal and non-fatal hardware errors detected by the CPU, the memory controller, and the PCIe hardware.
Mauro is the upstream maintainer of the Linux kernel media and EDAC subsystems, and also a major contributor to the Reliability, Availability and Serviceability (RAS) subsystems. Mauro also maintains the Tizen on Yocto packages upstream. He has worked for the Samsung Open Source Group since 2013. He previously worked for five years on the Red Hat RHEL kernel team, and has broad experience in telecommunications from his work at some of Brazil's largest wired and wireless carriers.