
CHESS Seminar

For recent seminars, see the Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS) page.

2009 presentations

From Boolean to Quantitative System Specifications
Tom Henzinger, EPFL and IST Austria, August 6, 2009.
Time and location: 4-5pm (540 Cory)

The boolean view holds that given a model and a specification, the specification is either true or false in the model. State transition models have been extended to accommodate certain numerical quantities, such as time stamps and probabilities. However, the boolean view of specifications is still prevalent, for example, in checking whether a timed model satisfies a formula of a real-time logic, or whether a stochastic process satisfies a formula of a probabilistic logic. We advocate the fully quantitative view, where a specification can be satisfied to different degrees, that is, the value of a given specification in a given model is a real number rather than a boolean. We survey some results in this direction and present some applications, not only in the timed and probabilistic domains, but also for specifying and analyzing the resource use, cost, reliability, and robustness of a system.
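
To make the shift from boolean to quantitative concrete, here is a minimal sketch (the model, numbers, and the particular discounted semantics are illustrative choices, not the talk's): the value of "eventually p" in a state is 1 if p holds now, and otherwise the best successor value discounted by d, so nearer satisfaction scores higher.

def discounted_eventually(succ, holds_p, state, d=0.9, depth=20):
    """Value of 'eventually p' at `state`: 1.0 if p holds now, else d times
    the best value achievable at any successor (depth-bounded)."""
    if holds_p(state):
        return 1.0
    if depth == 0:
        return 0.0
    return d * max(discounted_eventually(succ, holds_p, s, d, depth - 1)
                   for s in succ(state))

# Toy model: states 0..4 on a line, p holds only at state 4.
value = discounted_eventually(lambda s: [min(s + 1, 4)], lambda s: s == 4, 0)
print(value)  # 0.9**4 = 0.6561: satisfaction four steps away scores 0.6561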

Dissertation Talk - Model Transformation with Hierarchical Discrete-Event Control
Thomas Huining Feng, University of California, Berkeley, May 6, 2009.
Time and location: 5-6pm (540 Cory)

A flexible and efficient model transformation technique is developed in this work. A basic transformation is defined with a transformation rule, which describes the relationship between an input model and the result of the transformation. The syntax for transformation rules is close to the modeling language that designers use to build the input models. The semantics is defined based on graph transformation theory. A transformation workflow consists of basic transformations controlled in a control language. Existing control languages include state machines and variants of dataflow diagrams. Recognizing their limitations in expressiveness and efficiency, we create the Ptera (Ptolemy event relationship actor) model of computation based on event graphs. Ptera is shown to be an appropriate control language for transformation workflows. It allows transformation tasks to be composed hierarchically. The event queue and the notion of model time enable scheduling of future tasks. The use of a local variable to represent the model under transformation eliminates the overhead caused by communication via transient data packages.

Interval-based Abstraction Refinement and Applications
Pritam Roy, University of California, Santa Cruz, May 5, 2009.
Time and location: 4-5pm (540 Cory)

Games that model realistic systems can have very large state-spaces, making their direct solution difficult. This talk describes a set of abstraction refinement algorithms on different game models that adaptively refine the abstraction using interval-based procedures. First, I will present a novel abstraction technique that allows the analysis of reachability and safety properties of Markov decision processes (MDPs) with very large state spaces. The technique, called Magnifying-Lens Abstraction (MLA), copes with the state-explosion problem by partitioning the state-space into regions, and by computing upper and lower bounds for reachability and safety properties on the regions, rather than on the states. To compute these bounds, MLA iterates over the regions, considering the concrete states of each region in turn, as if one were sliding across the abstraction a magnifying lens that allowed viewing the concrete states. The algorithm adaptively refines the regions, using smaller regions where more detail is needed, until the interval (the difference between upper and lower bounds) is smaller than a specified accuracy. The second part of the talk presents a general interval-based abstraction-refinement framework that can handle both qualitative and quantitative systems. I will discuss the property required of the abstraction scheme in order to achieve convergence and termination of our technique. I will also present a special version of the interval-based symbolic algorithm to evaluate reachability and safety properties for transition systems and two-player games.
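
The magnifying-lens loop can be caricatured in a few lines (a much-simplified, hedged sketch: a Markov chain rather than a full MDP, and all names and numbers are invented here, not the paper's): regions carry [lower, upper] bounds; each pass "magnifies" one region, running concrete value iteration inside it while valuing out-of-region successors by their region bounds; regions whose interval is still too wide get split.

TARGET = 15
def trans(s):  # toy chain on 0..15: state 0 fails, 15 is the goal, both absorbing
    if s in (0, TARGET):
        return [(s, 1.0)]
    return [(s + 1, 0.6), (s - 1, 0.4)]

def magnify(states, outside, steps=60):
    """Concrete value iteration inside one region; successors outside the
    region are valued by the `outside` bound function."""
    v = {s: 1.0 if s == TARGET else 0.0 for s in states}
    for _ in range(steps):
        for s in states:
            if s not in (0, TARGET):
                v[s] = sum(p * (v[t] if t in v else outside(t))
                           for t, p in trans(s))
    return v

def mla(regions, eps=0.05, rounds=6, sweeps=40):
    for _ in range(rounds):
        lo = {r: 0.0 for r in regions}
        hi = {r: 1.0 for r in regions}
        rof = {s: r for r, ss in regions.items() for s in ss}
        for _ in range(sweeps):
            for r, ss in regions.items():
                lo[r] = min(magnify(ss, lambda t: lo[rof[t]]).values())
                hi[r] = max(magnify(ss, lambda t: hi[rof[t]]).values())
        wide = [r for r in regions if hi[r] - lo[r] > eps and len(regions[r]) > 1]
        if not wide:
            break
        for r in wide:  # adaptive refinement: split each too-coarse region in two
            ss = sorted(regions.pop(r))
            m = len(ss) // 2
            regions[r + "a"], regions[r + "b"] = set(ss[:m]), set(ss[m:])
    return lo, hi

lo, hi = mla({"L": set(range(8)), "R": set(range(8, 16))})
print(lo, hi)  # per-region [lower, upper] bounds on P(reach 15 before 0)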

Underpinning Empirical Architecture in a Concurrent Cyber-Physical World
Graham Hellestrand, Embedded Systems Technology Inc., San Carlos, California, April 28, 2009.
Time and location: 4-5pm (540 Cory)

Empirical architecture in electronics and software engineering is both rare and likely rudimentary. Playing architecture with physical prototypes is a costly and time-consuming business, especially for the large-scale, distributed real-time control systems of automobiles and aircraft. The only effective way to play architecture, and its enabling game of optimization and attendant game of verification, is through models and simulation. Apart from small, but critical, aspects of such systems, formally proving correctness and optimality is regarded as an intractable endeavour. Counterintuitively, the derivation of a specification from a requirement is most efficaciously performed using modelling and simulation of the defining use cases, with the fillip of contemporaneously deriving system-level verification suites. Requirements informally define a system and its subsystems, each within a context. The specification of a system must observe the system-context relationship; so must its verification.

A control system in isolation is interesting but not so useful; when coupled with the real-world plant, or models of the plant, that it controls, it forms a system. Physical plant lives in the continuous time and signals domains; its control does not. Effective specification, architecture, optimization and design technologies and methodologies for these so-called cyber-physical systems must bridge the gaps between the continuous and discrete time and signals domains.

This presentation will discuss such a system designed and built by Embedded Systems Technology, Inc., its fundamental innovations, and its design and operating constraints. And, the dice being favourable, it will demonstrate the extraordinary capability with which such a system endows the processes of architecting, optimizing, designing and verifying large-scale distributed real-time systems.

Modeling, Simulation and Analysis of Integrated Building Energy and Control Systems
Michael Wetter, Simulation Research Group, Lawrence Berkeley National Laboratory, Berkeley, April 21, 2009.
Time and location: 4-5pm (540 Cory)

The design and operation of integrated building energy and control systems pose formidable challenges with respect to their interdisciplinary nature and to the various temporal and spatial scales that govern the transient response of such systems. We start this presentation by discussing the various temporal scales of such systems. Next, we discuss the current approach to building energy system simulation and its applicability to controls analysis. We present ongoing activities in building system modeling based on the Modelica language that support computer-simulation-based rapid prototyping of new energy systems, as well as the development and analysis of building control systems. We also discuss extensions to the Ptolemy II program that led to the Building Controls Virtual Test Bed, which is used for co-simulation of building energy and control systems. Various applications that are enabled by such a new approach to building modeling and simulation will be presented. We close by presenting R&D needs to make the above technologies more widely applicable.

On the synthesis of correct-by-design embedded control software
Paulo Tabuada, University of California, Los Angeles, April 17, 2009.
Time and location: 1-2pm (212 Cory)

The tight integration of computational systems with the physical world, now known as Cyber-Physical Systems, is believed to hold great promise for the competitiveness of several industries such as automotive, aerospace, chemical production, energy, healthcare, manufacturing, etc. Within Cyber-Physical Systems, embedded control software plays a major role by monitoring and regulating several continuous variables through feedback loops. Nevertheless, the design of embedded control software is still considered an art for which no rigorous and comprehensive design methodologies exist. In order to detect and eliminate design flaws, and the unavoidable software bugs, a large part of the design budget is consumed with validation and verification efforts. In this talk, I will propose a new paradigm for the synthesis of embedded control software. By changing the emphasis from verification to design, I will show that it is possible to synthesize correct-by-design embedded control software while providing formal guarantees of correctness and performance. This proposal is grounded on several technical results describing how differential equation models for control systems can be transformed into equivalent finite-state models for the purpose of software design. The new finite-state models offer a common language within which it is possible to express and solve design problems meeting computational and physical requirements. In particular, I will show how it is possible to automatically synthesize controllers enforcing discrete specifications (languages, finite-state machines, temporal logics, etc.) on continuous systems. The synthesized controllers describe both the continuous control laws and their correct-by-design software implementation.
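
As a toy illustration of the finite-state models involved (a sketch of the general idea, far simpler than the talk's construction: a sampled scalar system, quantized states and inputs, and interval arithmetic for sound over-approximation, all chosen here for illustration), one can build the abstract transition relation and read controller choices directly off it:

CELL = 0.5                        # state quantization width on [-2, 2]
INPUTS = [-0.5, 0.0, 0.5]         # quantized control inputs

def cells():
    k = -2.0
    while k < 2.0:
        yield (k, k + CELL)
        k += CELL

def post(cell, u):
    """Interval image of one cell under x+ = 0.8*x + u, returned as the set
    of cells it intersects (the abstract successors)."""
    lo, hi = 0.8 * cell[0] + u, 0.8 * cell[1] + u
    return [c for c in cells() if c[0] < hi and c[1] > lo]

# Finite-state model: abstract transitions indexed by (cell, input).
abstract = {(c, u): post(c, u) for c in cells() for u in INPUTS}
safe = [c for c in cells() if -1.0 <= c[0] and c[1] <= 1.0]
# e.g. keep only the inputs whose abstract successors all stay in `safe`;
# over-approximation makes this sound for the concrete system too
allowed = {c: [u for u in INPUTS if all(s in safe for s in abstract[(c, u)])]
           for c in cells()}
print(allowed[(0.0, 0.5)])  # inputs guaranteed to keep the next state in [-1, 1]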

Embedded system design with the Polychronous paradigm
Albert Benveniste, INRIA/IRISA, Rennes, France, April 14, 2009.
Time and location: 4-5pm (540 Cory)

While the term "synchronous languages" has largely been responsible for the success of the concept, it has also resulted in misunderstandings about what they are. In fact, the synchronous paradigm has much wider applicability and is much more flexible than its popular "round based MoC" would suggest. This is why Paul Le Guernic decided to promote the (less overloaded but also more cryptic) term Polychrony instead. But keep cool: this talk is indeed about synchronous languages. It will just be a somewhat heterodox view of them. Instead of highlighting simplicity (following Paul Caspi and Nicolas Halbwachs), I shall highlight power and flexibility. I do not claim that I shall present the right view, but I am sure this view will complement, in a useful way, everything you always wanted to know about synchronous languages...

Mobile Millennium: using smartphones to monitor traffic in privacy aware environments
Alexandre Bayen, University of California, Berkeley, April 7, 2009.
Time and location: 4-5pm (540 Cory)

This talk describes how the mobile internet is changing the face of traffic monitoring at a rapid pace. In the last five years, cellular phone technology has bypassed several attempts to construct dedicated infrastructure systems to monitor traffic. Today, GPS equipped smartphones are progressively morphing into a ubiquitous traffic monitoring system, with the potential to provide information almost everywhere in the transportation network. Traffic information systems of this type are one of the first instantiations of participatory sensing for large scale cyberphysical infrastructure systems. However, while mobile device technology is very promising, fundamental challenges remain to be solved to use it to its full extent, in particular in the fields of modeling and data assimilation. The talk will present a new system, called Mobile Millennium, launched recently by UC Berkeley, Nokia and Navteq, in which the driving public in Northern California can freely download software into their GPS equipped smartphones, enabling them to view traffic in real time and become probe vehicles themselves. The smartphone data is collected in a privacy-by-design environment, using spatially aware sampling. Using data assimilation, the probe data is fused with existing sensor data, to provide real time estimates of traffic. The data assimilation scheme relies on the appropriate use of Ensemble Kalman Filtering on networked hyperbolic first order partial differential equations, and the construction of lower-semicontinuous viability solutions to Moskowitz Hamilton-Jacobi equations. Results from experimental deployments in California and New York will be presented, as well as preliminary results from a pilot field operational test in California, which is planned to reach 10,000 probe vehicles in a few months. More information at: http://www.traffic.berkeley.edu
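
For readers unfamiliar with the data assimilation step, here is a generic, textbook-style Ensemble Kalman Filter analysis update (not the Mobile Millennium code; the toy road segment and all numbers are invented): ensemble members are candidate traffic states, and H picks out the measured components, e.g. densities at probe locations.

import numpy as np

def enkf_update(X, H, y, R):
    """X: n x m ensemble (m members), H: p x n observation operator,
    y: p observations, R: p x p measurement noise covariance."""
    m = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = A @ A.T / (m - 1)                          # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # perturb observations so the analysis ensemble keeps the correct spread
    Y = y[:, None] + np.random.multivariate_normal(
            np.zeros(len(y)), R, size=m).T
    return X + K @ (Y - H @ X)

# toy example: 5-cell road, measurements available in cells 1 and 3
X = np.random.rand(5, 30)                  # 30-member prior ensemble
H = np.zeros((2, 5)); H[0, 1] = H[1, 3] = 1
y = np.array([0.7, 0.2]); R = 0.01 * np.eye(2)
X_post = enkf_update(X, H, y, R)           # posterior ensemble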

1: Design as You See FIT: System-Level Soft Error Analysis of Sequential Circuits & 2: Optimizations of an Application-Level Protocol for Enhanced Dependability in FlexRay
Daniel Holcomb and Wenchao Li, UC Berkeley, March 31, 2009.
Time and location: 4-5pm (540 Cory)

First 30 minutes --

Design as You See FIT: System-Level Soft Error Analysis of Sequential Circuits

Soft errors in combinational and sequential elements of digital circuits are an increasing concern as a result of technology scaling. Several techniques exist for gate and latch hardening to synthesize circuits that are tolerant to soft errors. However, each such technique has associated overheads of power, area, and performance. In this work, we present a new methodology to compute the failures-in-time (FIT) rate of a sequential circuit, where the failures are at the system level. System-level failures are detected by monitors derived from functional specifications. Our approach includes efficient methods to compute the FIT rate of combinational circuits (CFIT), incorporating effects of logical, timing, and electrical masking. The contribution of circuit components to the FIT rate of the overall circuit can be computed from the CFIT and the probabilities of system-level failure due to soft errors in those elements. Designers can use this information to perform Pareto-optimal hardening of selected sequential and combinational components against soft errors. We present experimental results demonstrating that our analysis is efficient, accurate, and provides data that can be used to synthesize a low-overhead, low-FIT sequential circuit.
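
The composition at the heart of the method can be illustrated with back-of-the-envelope arithmetic (the component names and numbers below are hypothetical): each component contributes its CFIT weighted by the probability that a soft error there becomes a monitor-detected system-level failure.

components = {            # name: (CFIT, P(system failure | soft error here))
    "state_reg":  (0.020, 0.90),
    "ctrl_logic": (0.012, 0.35),
    "datapath":   (0.045, 0.05),
}
system_fit = sum(cfit * p_fail for cfit, p_fail in components.values())
print(f"system-level FIT = {system_fit:.5f}")  # 0.018 + 0.0042 + 0.00225 = 0.02445
# Hardening can then target the largest cfit * p_fail contributors first.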

Second 30 minutes --

Optimizations of an Application-Level Protocol for Enhanced Dependability in FlexRay

FlexRay is an automotive standard for high-speed and reliable communication that is being widely deployed for next generation cars. The protocol has powerful error-detection mechanisms, but its error-management scheme forces a corrupted frame to be dropped without any notification to the transmitter. In this paper, we analyze the feasibility of, and propose an optimization approach for, an application-level acknowledgement and retransmission scheme for which transmission time is allocated on top of an existing schedule. We formulate the problem as a Mixed Integer Linear Program. The optimization comprises two stages. The first stage optimizes a fault tolerance metric; the second improves scheduling by minimizing the latencies of the acknowledgement and retransmission messages. We demonstrate the effectiveness of our approach on a case study based on an experimental vehicle designed at General Motors.

Integrative Modeling and Heterogeneous System Simulation
Dr. Jonathan Sprinkle, University of Arizona, March 17, 2009.
Time and location: 4-5pm (540 Cory)

The promise of Model-Integrated Computing for advanced systems simulation (especially cyber-physical systems) is in its ability to integrate many different tools, simulation packages, execution environments, and operating systems, while specifying the necessary details in an appropriately terse, domain-specific model. In this talk, we provide some context for this approach, used by researchers at the University of Arizona to control UAVs in the Command and Control (C2) Wind Tunnel, a project led by Janos Sztipanovits at Vanderbilt/ISIS, as well as an independent effort for integrative simulations of UGVs. Such a strategy enables significant deployable contributions to the automatic control of unmanned vehicles from persons with a wide array of technical backgrounds and abilities, including undergraduate researchers, in a relatively short period of time (less than half an academic semester).

In analysis of these approaches, the core capabilities of integrative modeling must be carefully considered in the context of software synthesis for cyber-physical systems. Notably, some architectures of computational CPS may be inherently prone to variations in behavior when certain models of computation used in simulation are reused in the physical deployment.

Various portions are joint work with Janos Sztipanovits, Gabor Karsai, Claire Tomlin, Shankar Sastry, and Brandon Eames.

Manycore Vector-Thread Architectures
Christopher Batten, University of California, Berkeley, March 10, 2009.
Time and location: 4-5pm (540 Cory)

Serious technology issues are breaking down the traditional abstractions in computer engineering. Power and energy consumption are now first-order design constraints, and the road map for standard CMOS technology has never been more challenging. In response to these technology issues, computer architects are turning to multicore and manycore processors where tens to hundreds of cores are integrated on a single chip. However, this breaks down the traditional sequential execution abstraction, forcing software programmers to parallelize their applications. This talk will introduce a new architectural approach called vector-threading (VT), which is a first step toward addressing these challenges. Vector-threading combines the energy-efficiency and simple programming model of vector execution with the flexibility of multithreaded execution.

This talk will also describe two implementations of vector-threading. The Scale VT Processor is a prototype for embedded applications implemented in a TSMC 0.18um process. Scale includes a RISC control processor and a four-lane vector-thread unit that can execute 16 operations per cycle and supports up to 128 active threads. The 16 sq mm chip runs at 260 MHz while consuming 0.4-1.1 W across a range of kernels. We have leveraged our insights from our first implementation of vector-threading to begin developing the Maven VT Processor. A Maven chip would include tens to hundreds of simple control processors each with its own single-lane vector-thread unit (VTU). A Maven single-lane VTU is potentially easier to implement and more efficient than a Scale multiple-lane VTU. Maven lanes can be coupled together with a combination of low-level software running on the control processor and fast hardware barriers.

Uses of Synchronized Clocks in Test and Measurement Systems
Jeff Burch and John Eidson, Agilent Labs, March 3, 2009.
Time and location: 4-5pm (540 Cory)

The last five years have seen increasing interest in the use of synchronized clocks in distributed system architectures. This is a direct result of the synchronization performance achievable using protocols such as IEEE 1588 and the availability of commercial silicon and infrastructure supporting these protocols. This talk will provide an update on the performance capabilities of IEEE 1588. We will also review both existing and pending applications that depend on synchronization and which in our opinion can benefit from some of the work underway in the CHESS program. In particular we will show how these techniques can be used in the field of electronic instrumentation and how the presence of a system-wide time service can be used to solve some interesting practical system problems. We will have a live demonstration of a prototype test system that uses synchronized clocks to solve problems suggested to us by a major test system integrator.
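
For context, the basic IEEE 1588 synchronization arithmetic is simple (these are the standard textbook formulas; the timestamps below are made up): the master timestamps a Sync message's departure (t1), the slave timestamps its arrival (t2) and its own Delay_Req departure (t3), and the master timestamps the Delay_Req arrival (t4). Assuming a symmetric path, the clock offset and one-way delay follow directly.

def ptp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay
    return offset, delay

# toy numbers (seconds): slave runs 1.5 ms ahead, path delay is 0.5 ms
print(ptp_offset_delay(10.0000, 10.0020, 10.0030, 10.0020))  # (0.0015, 0.0005)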

Implementing Synchronous Models on Distributed Execution Platforms
Stavros Tripakis, University of California, Berkeley, February 24, 2009.
Time and location: 4-5pm (540 Cory)

We study semantics-preserving implementation of synchronous models on asynchronous, distributed execution platforms. In particular, we show how to map synchronous block diagrams (a notation at the heart of widespread tools such as Simulink) onto a loosely time-triggered distributed architecture (LTTA). LTTA is fairly straightforward to implement as it does not require global synchronization or blocking communication. We achieve this mapping by introducing an intermediate layer, called Finite FIFO Platform (FFP). FFP is similar to a Kahn Process Network, but with finite queues. FFP can be easily implemented on top of LTTA. The key to achieving semantic preservation is to provide queue bounds that are sufficient to ensure that the FFP doesn't deadlock. This, together with Kahn's determinacy result, implies that the sequences of observed values in the asynchronous FFP implementation are identical to those of the original synchronous model. We also study the performance of the asynchronous implementation, in terms of throughput and latency. We introduce worst-case, logical-time throughput and latency. These are "logical-time" in the sense that they depend only on the topology of the architecture and the sizes of the queues, but not on the real-time behavior of the distributed clocks. We show how logical-time throughput and latency can be computed algorithmically, and how they are related to real-time throughput and latency. We report on a simple implementation in Haskell. This is joint work with Claudio Pinello, Albert Benveniste, Alberto Sangiovanni-Vincentelli, Paul Caspi and Marco Di Natale.
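
The FFP idea can be caricatured in a few lines (an illustrative toy, not the paper's construction): each actor fires only when its input queues are non-empty and its output queues have space, so with adequate queue bounds a fair asynchronous schedule never blocks permanently and reproduces exactly the value sequences that the synchronous model defines.

from collections import deque

class Queue:
    def __init__(self, bound): self.q, self.bound = deque(), bound
    def can_put(self): return len(self.q) < self.bound
    def can_get(self): return len(self.q) > 0

# pipeline: source -> middle -> sink; bound 1 suffices for a simple chain
q1, q2, out, n = Queue(1), Queue(1), [], 0
for step in range(100):                 # fair, asynchronous-style scheduling
    if q1.can_put():                    # source actor: emit 0, 1, 2, ...
        q1.q.append(n); n += 1
    if q1.can_get() and q2.can_put():   # middle actor: compute f(x) = 2x
        q2.q.append(2 * q1.q.popleft())
    if q2.can_get():                    # sink actor: observe outputs
        out.append(q2.q.popleft())
print(out[:5])  # [0, 2, 4, 6, 8] -- the sequence the synchronous model defines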

The APES-LESS project: Access Point Event Simulation of Legacy Embedded Software Systems
Stefan Resmerita, University of Salzburg, February 17, 2009.
Time and location: 4-5pm (540 Cory)

The implementation of a functional software model (with concurrency and timing properties) on a sequential execution platform may exhibit behaviors unaccounted for in the model, due to influences of platform-specific parameters. In particular, dataflow and timing properties of the model may be violated in the implementation as a result of the interplay of execution times and preemptive scheduling. In the embedded systems industry, the established mode of validating properties of an implementation is extensive testing, which is done either by simulating the application mapped on a detailed model of the platform (usually using an instruction set simulator) or by using a hardware-in-the-loop setup. Both processes are expensive in terms of time, as well as the human and logistic resources involved. We present in this work an approach for fast simulation of software models mapped to platform abstractions. In particular, we focus on legacy automotive control software in C, running on an OSEK operating system. We refer to a line of source code containing an I/O access (with respect to the owner task) as an access point. In a run of the software, an "access point event" (APE) occurs whenever the code of an access point starts executing. The proposed simulation framework is based on three main ingredients: (1) The ability to catch all APEs during an execution, (2) The ability to stop and then resume the execution of a task at any access point, and (3) The ability to determine the (platform-specific) execution time of the portion of code between any two consecutive access points of the same task. We achieve (1) by instrumenting the application code, i.e., inserting a callback to the simulation engine before every access point. Feature (2) is obtained by executing each application task in a separate (Java) thread, which can be paused and then resumed as necessary in the callback function. Objective (3) can be achieved by leveraging existing methods for estimation of execution times. The simulation engine employs the discrete-event domain in Ptolemy II to process the APEs generated by the application tasks, with timestamps given by the execution times determined at (3) and reported to the engine as parameters of the instrumentation callbacks. Task management operations are provided by a partial OSEK implementation in Ptolemy II.
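
A hedged sketch of the simulation loop (the names and structure here are invented for illustration; the real framework uses Ptolemy II's discrete-event domain and Java threads): each instrumented task yields at its access points, reporting the platform execution time consumed since the previous one, and a discrete-event engine orders the resulting APEs by timestamp.

import heapq

def task_a():             # instrumented task: yield (exec time to next AP)
    yield 5; print("A: read sensor")
    yield 3; print("A: write actuator")

def task_b():
    yield 4; print("B: read sensor")

def simulate(tasks):
    queue = []
    for i, t in enumerate(tasks):                # schedule each first AP
        heapq.heappush(queue, (next(t), i, t))
    while queue:
        now, i, t = heapq.heappop(queue)         # earliest APE fires next
        print(f"t={now}:", end=" ")
        try:
            heapq.heappush(queue, (now + next(t), i, t))
        except StopIteration:                    # task finished its run
            pass

simulate([task_a(), task_b()])
# t=4: B: read sensor / t=5: A: read sensor / t=8: A: write actuator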

Model-Based Development of Fault-Tolerant Real-Time Systems
Christian Buckl, TU Munchen, February 10, 2009.
Time and location: 4-5pm (540 Cory)

The design of fault-tolerant real-time systems is a complex task. The system must not only satisfy real-time requirements, but it must also deliver the specified functionality in the presence of both hardware and software faults. This talk presents a model-based development tool for the design of fault-tolerant real-time systems. The tool focuses on the code generation of non-functional requirements and therefore complements existing tools. The major contribution is the presentation of adequate models that can be used to model fault-tolerant systems and generate the code automatically. These models comprise a formal description of the hardware architecture, the software components and their temporal behavior, the fault assumptions, and the selected fault-tolerance mechanisms. Using a template-based code generator, the fault-tolerant real-time system is generated. This approach allows an easy expansion of the code generation functionality and therefore offers a solution to handle the heterogeneity of fault-tolerant systems. Finally, we will outline how formal methods can be integrated to prove the correctness of the generated code.

Modular Code Generation from Synchronous Block Diagrams: Modularity vs. Reusability vs. Code Size
Stavros Tripakis, University of California, Berkeley, February 3, 2009.
Time and location: 4-5pm (540 Cory)

Synchronous block diagrams (SBDs) are a hierarchical signal-flow diagram notation with synchronous semantics. They are the fundamental model behind widespread tools in the embedded software domain, such as SCADE and the discrete-time subset of Simulink. We study modular code generation from SBDs: modular means that code is generated for a given composite block independently from context (i.e., without knowing in which diagrams this block is to be used). Previous methods fail to address this problem in a satisfactory manner. They often generate "monolithic" code, e.g., a single step-function per block. This introduces false dependencies between the inputs and outputs of the block, and compromises reusability, by not allowing the block to be used in some contexts. As a result, state-of-the-art tools either impose restrictions on the diagrams they can handle or resort to flattening. We propose a framework that fixes this by generating, for a given block, a variable number of interface functions, as many as needed to achieve maximal reusability, but no more. In the worst case, N+1 functions may be needed, where N is the number of outputs of the block. It is crucial to minimize the number of interface functions, for reasons of scalability, but also because of IP concerns. We are thus led to define a quantified notion of modularity, in terms of the size of the interface of a block. The smaller the interface, the more modular the code is. Our framework exposes fundamental trade-offs between reusability, modularity and code size. We show how to explore these trade-offs by choosing appropriate graph clustering algorithms. We present a prototype implementation and experimental results carried out on Simulink models. We also describe extensions of our framework to triggered and timed diagrams. This is joint work with Roberto Lublinerman.
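
The false-dependency problem is easy to see in miniature (a conceptual toy, not the paper's formalism): a block computing y1 = f(x1) and y2 = g(x2, state) with a single monolithic step function cannot sit inside a feedback loop that feeds y1 back into x2, while a two-function interface can.

class MonolithicBlock:
    """One step function: y1 depends falsely on x2, so x2 = y1 is a cycle."""
    def __init__(self): self.state = 0
    def step(self, x1, x2):
        self.state += x2
        return x1 + 1, self.state       # needs x2 even to produce y1

class ModularBlock:
    """Two interface functions: the true dependencies are exposed."""
    def __init__(self): self.state = 0
    def out1(self, x1):                 # y1 = x1 + 1, independent of x2
        return x1 + 1
    def out2(self, x2):                 # y2 and the state update
        self.state += x2
        return self.state

b = ModularBlock()
y1 = b.out1(7)      # feedback context: y1 feeds back into x2
y2 = b.out2(y1)     # fine with two interface functions, a cycle with one
print(y1, y2)       # 8 8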

Time Portable Programming the JAviator in Tiptoe OS
Christoph Kirsch, University of Salzburg, January 13, 2009.
Time and location: 4-5pm (540 Cory)

The JAviator (Java Aviator) is a high-performance quadrotor model helicopter that is built around a high-integrity frame, which is horizontally and vertically symmetric, and supports high payloads through light-weight materials and advanced brushless motors. The JAviator is 1.3m in diameter, weighs 2.2kg in total, and generates a maximum lift of 5.4kg, which translates into a theoretical maximum payload of 3.2kg. Without payload the maximum flight time is around 40min. We have designed and built the JAviator completely from scratch and use it as a test platform for software projects such as Exotasks, which we developed in collaboration with IBM Research, and Tiptoe, which is our own prototypical real-time operating system. Exotasks are an alternative to Java threads, and enable time-portable programming of hard real-time applications in Java, hence the name JAviator. Time-portable programs do not change their relevant real-time behavior across different hardware platforms and software workloads. Tiptoe supports even stronger forms of time portability than Exotasks, in particular, the execution of software processes whose real-time behavior, even including system-level aspects such as their I/O communication and memory management, can be predicted and maintained per process, independently of any other processes. The talk will give an overview of the JAviator's hardware and flight performance, and then focus on the latest developments in the Tiptoe kernel, in particular, its time-portable process model, scheduler design, and memory management. This is joint work with S.S. Craciunas, H. Payer, H. Roeck, A. Sokolova, H. Stadler, and R. Trummer.

2008 presentations

Synchronous Reactive Communication: Generalization, Implementation, and Optimization
Guoqiang (Gerald) Wang, University of California, Berkeley, December 9, 2008.
Time and location: 4-5pm (540 Cory)

Model-based development of embedded real-time software aims at improving quality by enabling design-time verification and simulation and by fostering reuse. Model-based design is very popular because of the availability of tools for simulation and formal verification of system properties. When implementing a high-level model in code, it is important to preserve the semantics of the underlying model of computation, so as to retain the results of the simulation and verification stages. In this talk, the challenges associated with the implementation of synchronous reactive models are discussed. A platform-based design methodology and implementation framework is presented. Two semantics-preserving wait-free communication protocols are introduced, and corresponding efficient portable implementations under the OSEK OS standard are provided. Memory optimization under timing constraints is considered. Finally, the design flow is completed with automatic code generation support.
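
To fix intuition, here is a generic double-buffer wait-free protocol of the kind such schemes build on (the details are illustrative, not the talk's exact protocol or its OSEK implementation): the writer fills the slot the reader is not directed to, then publishes it with a single atomic index write, so neither side blocks or sees a torn value.

class DoubleBuffer:
    """Single-writer/single-reader double buffer."""
    def __init__(self):
        self.buf = [None, None]
        self.current = 0                # slot the reader should use

    def write(self, value):             # called by the writer task only
        free = 1 - self.current
        self.buf[free] = value          # slot not in use by the reader
        self.current = free             # single atomic publish

    def read(self):                     # called by the reader task only
        return self.buf[self.current]

db = DoubleBuffer()
db.write((1, 2, 3))
print(db.read())                        # (1, 2, 3), with no locks on either side
# Note: two slots assume the reader copies the value before the writer
# publishes twice more; general rate mismatches need more slots and careful
# sizing, which is where the memory optimization mentioned above comes in.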

Service Component Architecture (SCA)
Luciano Resende, IBM Silicon Valley, San Jose, California, December 2, 2008.
Time and location: 4-5pm (540 Cory)

Service Oriented Architecture (SOA) is an approach for developing IT solutions that are flexible and reusable. SOA is not a new concept and has evolved over the years to reach the flexibility required in today's business environment. Web services introduced a loosely coupled architecture to provide application reuse, but flexibility and agility remained a challenge. Many industry vendors who were experienced with SOA pain points pulled together and created the Service Component Architecture standard to address these challenges. Apache Tuscany provides an open source implementation of the SCA standards. Many vendors have announced their support for SCA and will be providing products in the near future. Apache Tuscany is used by vendors such as IBM as the basis for their product offerings. In this presentation we will introduce SCA and demonstrate how SCA can be used to bring many IT assets together to create useful solutions, for example an on-line retail store. You can learn more about the Tuscany open source project at this link: http://tuscany.apache.org/tuscany-dashboard.html

Process Network in Silicon: A High-Productivity, Scalable Platform for High-Performance Embedded Computing
Mike Butts, Ambric, Inc., Beaverton, Oregon, November 25, 2008.
Time and location: 4-5pm (540 Cory)

Conventional high-performance embedded system technology has reached fundamental scaling limits of single CPU/DSP performance and ASIC/FPGA hardware development productivity. Adapting the SMP multicore programming model from general-purpose computing has serious scaling and reliability issues as well, as shown by work here at Berkeley. Adapting the SIMD programming model from HPC is not always a good match to increasingly irregular and complex algorithms in video, image and wireless processing. We defined a Structured Object Programming Model, based on the Kahn process network model of computation, specifically for embedded computing. Objects are strictly encapsulated software programs running concurrently on a MIMD array of processors and memories. They communicate and synchronize with one another through a structure of buffered channels, in which every data transfer is a synchronization event. An application is a hierarchy of objects and structure, programmed directly in high-level language, without need for automatic parallelizing compilers or synthesis tools. Having first defined this programming model, we defined a scalable hardware architecture, developed one teraOPS production silicon and an integrated development environment, and delivered it to a number of customers who are designing it into their products. University projects have been using it as well. Developers have found this programming model easy to learn and use, and two to three times as productive as their previous DSP, FPGA or ASIC platforms. A hardware and tools demo will be available after the seminar.

The Design of a Platform-Based Design Environment for Synthetic Biology
Douglas Densmore, UC Berkeley, Berkeley, California, November 18, 2008.
Time and location: 4-5pm (540 Cory)

Genomics has reached the stage at which the amount of DNA sequence information in existing databases is quite large. Moreover, synthetic biology is now using these databases to catalog sequences according to their functionality, thereby creating standard biological parts which can be used to create large systems. What is needed now are flexible tools which not only permit access and modification to that data but also allow one to perform meaningful manipulation and use of that information in an intelligent and efficient way. These tools need to be useful to biologists working in a laboratory environment while leveraging the experience of the larger CAD community. This talk will outline some of the major issues facing tool development for synthetic biological systems. These include issues of data management, part standardization, and composite part assembly. In addition to giving an overview of and introduction to systems based on standard biological parts, the talk will discuss the Clotho design environment. This is a system specifically created at UC Berkeley to address these issues as well as be an open source, plugin-based system to encourage participation by the community as a whole.

The Nimrod/K director for the Kepler workflow environment
Colin Enticott, Monash University, Australia, November 12, 2008.
Time and location: 4-5pm (540 Cory)

A challenge for Grid computing is the difficulty in developing software that is parallel, distributed and highly dynamic. In this presentation we will discuss the challenges of providing a new Grid system by combining the workflow system Kepler with a large scale Grid middleware application called Nimrod. The Kepler project provides a workflow environment with various tools to access Grid services. The Nimrod toolkit offers Grid services such as multi-resource meta-scheduling, data movement and parameter search and analysis tools. We will discuss how Grid applications and their data files are represented in Nimrod/K and how we have extended the Kepler environment to support threading. Combining these two Grid applications offers a powerful environment for performing large-scale Grid-based workflows.

Toward the Predictable Integration of Real-Time COTS Based Systems
Marco Caccamo, University of Illinois, Urbana Champaign, October 28, 2008.
Time and location: 4-5pm (540 Cory)

Integration of COTS components in critical real-time systems is challenging. In particular, we show that the interference between cache activity and I/O traffic generated by COTS peripherals can unpredictably slow down a real-time task by up to 44%. To solve this issue, we propose a framework comprising three main components: 1) an analytical technique that computes safe bounds on the I/O-induced task delay; 2) COTS-compatible devices, the peripheral gate and hardware server, that control peripheral access to the system; 3) a coscheduling algorithm that maximizes the amount of allowed peripheral traffic while guaranteeing all real-time task constraints. We implemented the complete framework on a COTS-based system using PCI peripherals, and we performed extensive experiments to show its feasibility.

Model Engineering
Edward A. Lee and the Ptolemy Pteam, University of California, Berkeley, Berkeley, October 21, 2008.
Time and location: 4-5pm (540 Cory)

Model-based design uses models of systems as the specification for software behavior and synthesizes executable software from those models. As with any successful engineering discipline, design techniques must evolve to support development, maintenance, and evolution of designs, and these techniques must be able to handle designs of realistic size and complexity. The discipline of software engineering provides techniques, such as object-oriented design, static analysis, and formal verification, for software designs. This talk explores techniques that support development, maintenance, and evolution of models. In particular, we discuss model transformation, model ontologies, and multimodeling, with applications to construction of large models (e.g. MapReduce), model optimization, and consistency management across multiple models.

Verification-Guided Error Resilience
Prof. Sanjit Seshia, University of California, Berkeley, Berkeley, October 7, 2008.
Time and location: 4-5pm (540 Cory)

Verification-guided error resilience (VGER) is the use of algorithmic verification (such as model checking) to estimate a system's vulnerability to reliability problems and to reduce overheads of circuit-level mechanisms for error resilience. I will discuss our results in applying VGER to soft errors in sequential circuits; for instance, on one design our approach reduced the power overhead of error resilience by a factor of 4.35. I will also discuss the connections between VGER and mutation-based techniques for evaluating coverage in formal verification.

Game-Theoretic Timing Analysis
Sanjit Seshia, UC Berkeley, Berkeley, California, September 23, 2008.
Time and location: 4-5pm (540 Cory)

Estimating the worst-case execution time (WCET) of tasks is a key step in the design of reliable real-time software and systems. I will present a new, game-theoretic approach to estimating WCET based on performing directed measurements on the target platform. We model the estimation problem as a game between our algorithm (player) and the environment of the program (adversary), where the player seeks to find the longest path through the program while the adversary sets environment parameters to thwart the player. I will present theoretical and experimental results demonstrating the utility of our approach. Applications of our theory other than WCET estimation will also be discussed.

Integrating Timing with Data and Space Availability as Firing Rules for Actors in a Graphical Dataflow Language
Tim Hayles, National Instruments, Austin, Texas, September 9, 2008.
Time and location: 4-5pm (540 Cory)

The structured dataflow of LabVIEW, like C/C++, is untimed. Add-ons do bring control of time to programs, but they tend to be somewhat vendor-specific and ad hoc. Furthermore, actors with dataflow dependencies are sequential. The independently free-running yet often precisely timed behavior of I/O actors, along with the creation of an experimental asynchronous wire, inspired an effort to bring native support to LabVIEW for exchanging both data and timing information between free-running I/O and computational actors. This talk will further outline the motivations for creating this model of computation, the policies for data exchange between actors, and how this MoC is likely to be integrated with dataflow. The timing wire abstraction itself will be discussed in some detail, along with illustrative descriptions of some typical I/O architectures and current asynchronous timing and data wire implementations.

On Computer Science in Systems Biology
Oded Maler, CNRS-VERIMAG, Grenoble, France, August 22, 2008.
Note special time and location: 4-5pm (540 Cory)

In this talk I argue that computer science is not only a functional tool in the service of Biology, but that it should be part of the mathematics and physics of Biology. I illustrate with two examples: adding time to genetic regulatory networks, and reducing the number of false positives in discrete abstractions of continuous dynamical systems.

From Control Loops to Software
Oded Maler, CNRS-VERIMAG, Grenoble, France, August 21, 2008.
Note special time and location: 4-5pm (540 Cory)

In this talk I explain the basic steps involved in transforming a controller, defined as a mathematical object, into a real-time program that realizes it. I also survey the evolution of control technology from physical "computation" of the feedback function, via analog computing, to sampled-data digital control.

Timed Automata: Modeling and Analysis
Oded Maler, CNRS-VERIMAG, Grenoble, France, August 20, 2008.
Note special time and location: 3-5pm (540 Cory)

In this talk I will give an introduction to timed systems, system models situated at an extremely important level of abstraction, between automata and continuous dynamical systems. After a short introduction to the principles of analysis of timed systems, I will devote the rest of the time to a survey of many attempts, pathetic and heroic alike, to fight the clock explosion and break the scalability barriers in order to use timed automata technology in real-life situations.

Verification for Dummies: A Gentle Introduction to Formal Verification and Hybrid Systems
Oded Maler, CNRS-VERIMAG, Grenoble, France, August 19, 2008.
Note special time and location: 3-5pm (540 Cory)

In this talk I will explain the principles underlying formal verification of discrete event systems. Starting from an example coming from everyday life, I will show how automaton models can be used for simulation, verification and controller synthesis for such systems. The talk will conclude with some hints on how to extend this methodology toward continuous systems defined by differential equations.

Modular Timed Graph Transformation
Hans Vangheluwe, McGill University, Quebec, Canada, August 6, 2008.
Note special time: 3-4pm

We propose to use the Discrete EVent system Specification (DEVS) formalism to describe and execute graph transformation control structures. We provide a short review of existing programmed graph rewriting systems, listing the control structures they provide. As DEVS is a timed, highly modular, hierarchical formalism for the description of reactive systems, control structures such as sequence, choice, and iteration are easily modelled. Non-determinism and parallel composition also follow from DEVS semantics. The proposed approach is illustrated through the modelling of a simple PacMan game, first in AToM3 and then with DEVS. We show how the use of DEVS allows for modular modification of control structure and ultimately allows real-time simulation/execution.

Stability Analysis of Switched Systems using Variational Principles
Michael Margaliot, Tel Aviv University, Israel, August 5, 2008.

Switched systems, that is, systems that can switch between several different modes of operation, are ubiquitous in the world around us. Mathematical models that incorporate switching between several subsystems have numerous applications in many disciplines of science. We consider the stability analysis of switched systems under arbitrary switching. A promising approach for addressing this problem is based on characterizing the "most unstable" switching law. This can be done using variational principles. We also describe how the variational approach can be merged with another approach for analyzing switched systems that is based on Lie-algebraic considerations.

Buffer Capacity Computation for Throughput Constrained Streaming Applications with Data-Dependent Inter-Task Communication
Maarten Wiggers, University of Twente, The Netherlands, July 31, 2008.
Note special time and location: 2-3pm (337A Cory)

Streaming applications are often implemented as task graphs, in which data is communicated from task to task over buffers. Currently, techniques exist to compute buffer capacities that guarantee satisfaction of the throughput constraint if the amount of data produced and consumed by the tasks is known at design-time. However, applications such as audio and video decoders have tasks that produce and consume an amount of data that depends on the decoded stream. This paper introduces a dataflow model that allows for data-dependent communication, together with an algorithm that computes buffer capacities that guarantee satisfaction of a throughput constraint. The applicability of this algorithm is demonstrated by computing buffer capacities for an H.263 video decoder.

A Type System for the Automatic Distribution of Higher-order Synchronous Dataflow Programs
Alain Girault, INRIA, June 10, 2008
Note special time: 11-12

We address the design of distributed systems with synchronous dataflow programming languages. As modular design entails handling both architectural and functional modularity, our first contribution is to extend an existing synchronous dataflow programming language with primitives allowing the description of a distributed architecture and the localization of some expressions onto some processors. We also present a distributed semantics to formalize the distributed execution of synchronous programs. Our second contribution is to provide a type system, in order to infer the localization of non-annotated values by means of type inference and to ensure, at compilation time, the consistency of the distribution. Our third contribution is to provide a type-directed projection operation to obtain automatically, from a centralized typed program, the local program to be executed by each computing resource. The type system as well as the automatic distribution mechanism has been fully implemented in the compiler of an existing synchronous dataflow programming language (namely Lucid Synchrone). This is joint work with Gwenael Delaval and Marc Pouzet.

Heterogeneous System Design with Functional DIF
William Plishker, University of Maryland, June 5, 2008.

Dataflow formalisms have provided designers of digital signal processing systems with analysis and optimizations for many years. As system complexity increases, designers are relying on more types of dataflow models to describe applications while retaining these implementation benefits. The semantic range of DSP-oriented dataflow models has expanded to cover heterogeneous models and dynamic applications, but efficient design, simulation, and scheduling of such applications has not. To facilitate implementing heterogeneous applications, we present a new dataflow model of computation that supports dynamic and static dataflow-based heterogeneous system design. With this semantic foundation, we show how actors designed in other dataflow models are directly supported by the functional dataflow interchange format (DIF) package, allowing system designers to immediately compose and simulate actors from different models. Schedules can then be expressed using a standard representation (the generalized schedule tree) which accommodates these dataflow models. Using an example, we show how this approach can be applied to quickly describe and functionally simulate a heterogeneous dataflow-based application.

Advanced topics in model-based software development
Bernhard Rumpe, Braunschweig University of Technology, June 3rd, 2008
Note special time and place: 10-11 521 Cory

Model-based software & systems development has not yet become as successful and efficient as the concept of modelling has promised. We are going to investigate what models are, where they can be successfully used, and what the obstacles to model-based development are. We look into specific topics, such as formal semantics of models, model composition and refinement, and model-based architectural evolution, to understand what potential lies in the use of models. Evolutionary approaches, for example, become increasingly important due to faster changes of requirements as well as technologies, both in the business and in the embedded systems domain. We therefore show how software evolution can be managed on an architectural level, provided that reliability-ensuring mechanisms, such as test models, exist. The key idea of the approach is to use some model artifacts to describe the architecture of a system and others to model tests. An architecture is evolved using systematic refactoring techniques, and this becomes a lot easier when regression tests allow one to repeatedly check the correctness of each evolution step and thus retain the reliability of the product during the evolutionary process.

Formal Specification and Analysis of Real-Time Systems in Real-Time Maude
Peter Csaba Olveczky, University of Oslo, May 13, 2008

Real-Time Maude is a tool that extends the rewriting-logic-based Maude system to support the executable formal modeling and analysis of real-time systems. Real-Time Maude is characterized by its general and expressive, yet intuitive, specification formalism, and offers a spectrum of formal analysis methods, including: rewriting for simulation purposes, search for reachability analysis, and temporal logic model checking. Our tool is particularly suitable to specify real-time systems in an object-oriented style, and its flexible formalism makes it easy to model different forms of communication.

This modeling flexibility, and the usefulness of Real-Time Maude for both simulation and model checking, has been demonstrated in advanced state-of-the-art applications, including scheduling and wireless sensor network algorithms, communication and cryptographic protocols, and in finding several bugs in embedded car software that were not found by standard model checking tools employed in industry.

This talk gives a high-level overview of Real-Time Maude and some of its applications, and briefly discusses completeness of analysis for dense-time systems.

Partial Evaluation for Optimized Compilation of Actor-Oriented Models
Gang Zhou, Monday, May 12, 2008, 3:30-4:30

One major achievement of traditional computer science is systematically abstracting away the physical world. The Von Neumann model provides a universal abstraction for sequential computation. The concept is so simple and powerful for transformational systems (vs. reactive systems) that any traditional program can run regardless of the underlying platform, whether it is a supercomputer or a desktop. Embedded software systems, however, engage the physical world. Time, concurrency, liveness, continuums and reactivity must be remarried to computation. In order to reason about these properties, a programming model must be able to express them directly.

The traditional techniques for doing concurrent programming on top of sequential programming languages like C use threads complemented with synchronization mechanisms like semaphores and mutual exclusion locks. These methods are at best retrofits to the fundamentally sequential formalism and are hard to understand, difficult to debug, brittle and error-prone.

Actor-oriented design presents a high-level abstraction for composing concurrent components. There is a rich variety of concurrency formalisms, spanning the whole spectrum from most expressive to most analyzable. I will focus on one particular model of computation called heterochronous dataflow (HDF), which strikes a nice balance between expressiveness and analyzability.

However, high level abstraction often introduces overhead and results in slower systems. In component based design, generic components are highly parameterized for reusability and thus are less efficient. The well-defined interface between components results in flexible composition but increases the cost of inter-component communication through the interface.

In the second part of this talk, I will address the problem of generating an efficient implementation from a high-level specification. I use partial evaluation as an optimized compilation technique for actor-oriented models. Compared with traditional compiler optimization, partial evaluation for embedded software works at the component level and heavily leverages domain-specific knowledge. Through model analysis (the counterpart of binding-time analysis in the use of partial evaluation for general-purpose software), the tool can discover the static parts of the model, including data types, buffer sizes, parameter values, model structure, execution schedules, etc., and then exploit this known information to reach a very efficient implementation. I use a helper-based mechanism, which leads to a flexible and extensible code generation framework. The end result is that the benefit offered by high-level abstraction comes with (almost) no performance penalty.
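
In miniature, the idea looks like this (a toy example for illustration, not the Ptolemy II code generator): a generic actor reads its parameters on every firing, while a specialized version generated once the parameters are known runs straight-line code with the lookups residualized away.

def generic_fir(taps):               # generic actor: taps traversed per firing
    def fire(window):
        return sum(t * x for t, x in zip(taps, window))
    return fire

def specialize_fir(taps):            # "code generation": unroll the known taps
    body = " + ".join(f"{t!r} * w[{i}]" for i, t in enumerate(taps))
    code = f"def fire(w):\n    return {body}\n"
    ns = {}
    exec(code, ns)                   # compile the specialized step function
    return ns["fire"]

fast = specialize_fir([0.5, 0.25, 0.25])
print(fast([4.0, 2.0, 2.0]))         # 3.0, computed by straight-line code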

The code generation framework described in this talk has been released in open source as part of Ptolemy II and can be downloaded from http://ptolemy.org/

Specification and Analysis of Electronic Contracts
Gerardo Schneider, University of Oslo, May 6, 2008

In this talk I will describe CL, a language for writing (electronic) contracts. The language allows one to write (conditional) obligations, permissions and prohibitions, and to represent the so-called contrary-to-duties (CTDs) and contrary-to-prohibitions (CTPs). CTDs and CTPs are useful to establish what happens in case obligations, respectively prohibitions, are not respected. The approach is action-based, meaning that the above normative notions are defined on actions and not on states of affairs. I will then sketch some initial ideas on model checking electronic contracts. Finally, I will present some open problems and application domains. (Joint work with Cristian Prisacariu and Gordon Pace)

Anytime Control Algorithms for Embedded Real-Time Systems
Luca Greco, University of Salerno, April 29, 2008

In this talk we consider scheduling the execution of a number of different software tasks implementing a hierarchy of real-time controllers for a given plant. Controllers are considered to be ordered according to a measure of the performance they provide. Software tasks implementing higher-performance controllers are assumed to require a larger worst-case execution time. Control tasks are to be scheduled within given availability limits on execution time, which change stochastically in time. The ensuing switching system is prone to instability, unless a very conservative policy of always using the simplest, least performant controller is enforced. The presented method allows conditioning the stochastic scheduling of control tasks so as to obtain better exploitation of computing capabilities, while guaranteeing almost-sure stability of the resulting switching system.

When can a UAV get smart with its operator, and say 'NO!'?
Prof. Jonathan Sprinkle, University of Arizona, April 15, 2008

This talk describes reachability calculations for a hybrid system formalism governing UAVs interacting with another vehicle in a safety-critical situation. We examine this problem to lay the foundations toward the goal of certifying certain protocols for flight critical systems. In order to pursue these goals, we describe here what mathematical foundations are necessary to inform protocol certification, as well as describe how such formalisms can be used to automatically synthesize simulations to test against certain danger areas in the protocol. This can provide a mathematical basis for the UAV to perhaps reject a command based on the known unsafe behavior of the vehicle. We describe how creating this formalism can help to refine or design protocols for multi-UAV and/or manned vehicle interaction to avoid such scenarios, or to define appropriate behaviors in those cases. Additional discussion regarding how to specify such protocols using model-based approaches, and to check decision-authority models will be included.

Model-Based Design of a Power Window System: Modeling, Simulation, and Validation
Pieter J. Mosterman, The MathWorks, April 8, 2008.

Model-Based Design allows experimenting with different design alternatives, even in very early conceptual design stages, while having an executable specification and taking detailed implementation effects into account. This is in contrast to a document-based approach, where each design stage generates new models of the same system under design from the specification of the previous stage. This presentation illustrates Model-Based Design of a power window control system as typically found in modern automobiles. A typical power window system is designed to meet various requirements, e.g., there should be a 10 [cm] bounce-back in case an obstacle is detected, and the maximum force exerted on the obstacle should be 100 [N]. Plant models of the power window system at various levels of detail are created to gradually refine the controller toward an implementation.

From Automated Software Testing to Likely Program Invariant Generation
Koushik Sen, UC Berkeley, March 18, 2008.

Testing with manually generated test cases is the primary technique used in industry to improve the reliability of software; in fact, such testing is reported to account for over half of the typical cost of software development. I will describe Concolic Testing, a systematic and efficient method that combines random and symbolic testing. Concolic testing enables automatic and systematic testing of large programs, avoids redundant test cases, and does not generate false warnings. Experiments on real-world software show that concolic testing can effectively catch generic errors such as assertion violations, memory leaks, uncaught exceptions, and segmentation faults. In the second part of my talk, I will present a novel application of concolic testing to generate likely data structure invariants.
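
The concolic loop itself is easy to picture. The following self-contained Python sketch (a toy, not the actual tool; a brute-force search over small integers stands in for the constraint solver) runs a program concretely, records the path condition, and negates the last branch to steer the next run toward an otherwise hard-to-hit bug:

    def program(x, y):
        """Program under test, instrumented to record its path condition
        as (predicate, taken) pairs over the symbolic inputs."""
        pc = []
        taken = x > y
        pc.append((lambda a, b: a > b, taken))
        if taken:
            taken2 = (x - y == 10)
            pc.append((lambda a, b: a - b == 10, taken2))
            if taken2:
                raise AssertionError("bug reached at x=%d, y=%d" % (x, y))
        return pc

    def solve(pc):
        """Stand-in for the symbolic solver: brute-force small integers."""
        for x in range(-15, 16):
            for y in range(-15, 16):
                if all(pred(x, y) == want for pred, want in pc):
                    return x, y
        return None

    x, y = 0, 0                                  # arbitrary initial input
    for _ in range(10):
        try:
            pc = program(x, y)
        except AssertionError as err:
            print(err)
            break
        # Negate the last branch and solve for inputs that flip it.
        solution = solve(pc[:-1] + [(pc[-1][0], not pc[-1][1])])
        if solution is None:
            break
        x, y = solution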

Numerical solution of nonlinear differential equations in musical synthesis
David Yeh, Stanford, March 11, 2008.

In creating algorithms for musical synthesis based upon physical modeling, one often assumes a linear system to take advantage of transformations that simplify the computation. In many cases, however, nonlinearities are responsible for the musically interesting variations over time that characterize an instrument, and such assumptions reduce the realism of the algorithm. Efficient methods are often sought to emulate the nonlinear effect; however, the full dynamics are not captured unless the nonlinear differential equations are actually solved.

The talk will begin with a summary of results from the presenter's work investigating the performance of various simple numerical integration methods on a particularly difficult circuit example involving a saturating nonlinearity. The talk then reviews two numerically based methods in the musical acoustics literature to directly solve nonlinear ordinary differential equations in real time for musical synthesis. These methods can generate explicit computations to process incoming signals robustly in real time even though the nonlinearities in the system may be implicitly defined. This explicitness provides opportunities to improve run-time performance on parallel hardware. An example found in the literature to simulate the partial differential equations of nonlinear plates using an FPGA will demonstrate the need for parallel hardware. Suggestions are sought from the audience regarding the parallelizability of these and other algorithms to solve differential equations numerically.
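
As a concrete illustration of the trade-off (an invented one-state example, not the circuit from the talk), the sketch below integrates a saturating system dv/dt = (u - v)/tau - g*tanh(v) with explicit forward Euler and with the implicit trapezoidal rule, solving the implicit update by Newton iteration at each audio sample:

    import math

    tau, g = 1e-3, 500.0
    f  = lambda v, u: (u - v) / tau - g * math.tanh(v)
    df = lambda v: -1.0 / tau - g / math.cosh(v) ** 2   # d f / d v

    def euler_step(v, u, h):
        return v + h * f(v, u)

    def trap_step(v, u, h):
        # Solve v1 = v + (h/2)*(f(v,u) + f(v1,u)) for v1 by Newton iteration.
        v1 = v
        for _ in range(20):
            r = v1 - v - 0.5 * h * (f(v, u) + f(v1, u))
            v1 -= r / (1.0 - 0.5 * h * df(v1))
            if abs(r) < 1e-12:
                break
        return v1

    h, v_e, v_t = 1.0 / 44100, 0.0, 0.0
    for n in range(100):
        u = math.sin(2 * math.pi * 1000 * n * h)        # 1 kHz drive signal
        v_e, v_t = euler_step(v_e, u, h), trap_step(v_t, u, h)
    print("Euler: %.5f  Trapezoidal: %.5f" % (v_e, v_t))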

Single and Multi-CPU Performance Modeling for Embedded Systems
Trevor Meyerowitz, UC Berkeley, February 26, 2008.

The combination of increasing design complexity, increasing concurrency, growing heterogeneity, and shrinking time-to-market windows has caused a crisis for embedded system developers. To deal with this, a growing number of microprocessors are being used to construct these systems, so that software and its timing have become dominating factors. The use of higher-level models for design space exploration and early software development is critical. Much progress has been made on increasing the speed of cycle-level simulators for microprocessors, but they may still be too slow for large-scale systems and are too low-level for design space exploration. Another problem is that constructing such optimized simulators is in itself a significant task, and they generally aren't very flexible.

This work focuses on modeling the performance of software executing on embedded processors in the context of a heterogeneous multi-processor system on chip, in a more flexible and scalable manner than current approaches. The thesis is that such systems need to be modeled at a higher level of abstraction and that, to ensure accuracy, the higher level must have a connection to lower levels. First, we describe different levels of abstraction for modeling such systems and how their speed and accuracy relate. Next, the high-level modeling of individual processing elements and of a bus-based microprocessor system is presented. Finally, an approach is developed for automatically annotating the original application source code with timing information obtained from a cycle-level model. The annotated source code can then be simulated without the underlying architecture while maintaining good timing accuracy. The methods described above are driven by execution traces produced by lower-level models and were developed for ARM microprocessors and MuSIC, a heterogeneous multiprocessor for Software Defined Radio from Infineon. Running the same code on the same data set, the annotated source code executed between one and three orders of magnitude faster than equivalent cycle-level models, with an average error magnitude of 4% (and a maximum of 20%).
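
The back-annotation step can be pictured with a small sketch (invented trace format and line numbers, not the actual ARM/MuSIC tooling): cycle counts observed in a cycle-level trace are accumulated per source line, so that "simulating" the annotated source reduces to summing delays instead of running the cycle-level model:

    # (source_line, cycles) pairs as they might appear in a cycle-level trace.
    trace = [(1, 3), (2, 12), (2, 12), (3, 7)]

    cycles_per_line = {}
    for line, cyc in trace:
        cycles_per_line[line] = cycles_per_line.get(line, 0) + cyc

    source = {1: "init();", 2: "filter(x);", 3: "emit(y);"}
    for ln, code in source.items():
        # Emit the source line with its timing annotation attached.
        print("%-12s /* +%d cycles */" % (code, cycles_per_line.get(ln, 0)))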

Model-Based Development of Fault-Tolerant Real-Time Systems
Prof. Alois Knoll, Technical University of Munich, February 19, 2008.

Model-based development tools and code generators are frequently used at the functional level. In the context of fault-tolerant real-time systems, however, code generation at the system level (communication in the distributed system, process management and scheduling, fault-tolerance mechanisms) would be even more important. Two problems can be identified that need to be solved in this context. First, current modeling languages like UML do not provide rigorous enough execution semantics for extensive code generation. Second, the generated code is dependent on the platform used (hardware, operating system); it is therefore impossible to design a code generator that a priori supports all possible platforms, and an extensible code generation framework is needed instead. In this talk, we will present solutions for both problems. We will present a domain-specific language with explicit execution semantics (e.g., a time-triggered computation model). The model covers different aspects such as hardware architecture, software components, their interaction and temporal behavior, expected faults and their effects, and finally the fault-tolerance mechanisms. We will show that the requirements specific to fault-tolerant computing, such as replica determinism, support for state synchronization, and previously known points in time for the distributed execution of fault-tolerance mechanisms, are satisfied by this modeling language. In addition, we will present a template-based code generation technique that allows easy extension of the code generation functionality. Using two lab applications as examples, we will demonstrate the benefits of our approach.
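
Template-based generation itself is a simple idea; the sketch below (invented task names and template, with Python's standard string.Template standing in for the real template engine) shows how platform-specific code can be stamped out per task:

    from string import Template

    tmpl = Template(
        "void ${task}_job(void) {\n"
        "    read_inputs_${task}();\n"
        "    compute_${task}();          /* period: ${period} ms */\n"
        "    write_outputs_${task}();\n"
        "}\n"
    )

    # Each entry would come from the system-level model in a real flow.
    for cfg in [{"task": "brake", "period": 5}, {"task": "lamp", "period": 20}]:
        print(tmpl.substitute(cfg))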

Enhancing the Visual Experience on the Mobile Computing and Communications Platforms
Achin Bhowmik, Intel Corporation, February 12, 2008.

Mobile computers have been going through unprecedented advancements in recent years, marked by an impressive annual volume of more than 100 million units in 2007, which is expected to double in another 4 years. The advent of high-performance notebook computers coupled with ubiquitous high-speed wireless Internet connectivity is having a profound effect on our daily lives. One of the rapidly proliferating usages among consumers is the creation and consumption of high-quality video content. The mobile computer is challenged to provide a superior visual experience to keep pace with consumer electronics devices, and at the same time offer prolonged battery life to allow truly untethered mobile usage. Currently, the display subsystem of a typical notebook consumes as much as half of the overall system average power. Laptop displays also exhibit blurred video playback, due to the "sample and hold" driving architecture of the LCD and slow response times of the liquid crystals. Intel has developed a suite of advanced video processing algorithms and system-level display power-quality optimization technologies. Integrated in the graphics controller chipsets, these technologies result in superior video quality and reduced display power consumption for longer battery life, thereby significantly enhancing the visual experience for users of the next-generation mobile computers.

Inventing and Prototyping Social Devices
Michael Winter, Stupid Fun Club, February 5, 2008.

Where is the "House of the Future" and why are embedded systems so lonely? In an effort to answer these questions, Stupid Fun Club is exploring social appliances (appliance to human and appliance to appliance). The enabling technologies are embedded controllers, cheap sensors, powerful networking, practical speech recognition, and quality text-to-speech.

Areas to be discussed: Real-time data acquisition, debugging complex systems, Zigbee appliance communication standard and clear text APIs. Also covered will be appliance companionship, personalities, learning, gossiping and lying. Developer development will be explored: reading material for the engineer, the fun of failing, Google Lunar Prize, Pleo the dinosaur, blinky lights, pretty faces, Zeno the robot kid, Gadgetoff, Hollywood, Brian Eno, Tom Waits, "what are the kids doing?", and hanging out in Oakland with Colt 45 and MoonBot.

A Hierarchical Coordination Language for Reliable Real-Time Tasks
Arkadeb Ghosal, UC Berkeley, January 22, 2008.

Complex requirements, time-to-market pressure, and regulatory constraints have made the design of embedded systems extremely challenging. This is evident from the increase in effort and expenditure for the design of safety-driven, real-time, control-dominated applications like automotive and avionic controllers. Design processes are often hampered by the lack of proper programming tools for specifying and verifying critical requirements (e.g., timing and reliability) of such applications. Platform-based design, an approach for designing embedded systems, addresses the above concerns by separating the requirement from the architecture. The requirement specifies the intended behavior of an application, while the architecture specifies the guarantees (e.g., execution speed, failure rate). An implementation, a mapping of the requirement onto the architecture, is then analyzed for correctness. This orthogonalization of concerns makes specification and analysis simpler. An effective use of this design methodology has been proposed in the Logical Execution Time (LET) model of real-time tasks. The model separates the timing requirements (specified by release and termination instants of a task) from the architecture guarantees (specified by the worst-case execution time of the task).

This dissertation proposes a coordination language, Hierarchical Timing Language (HTL), that captures the timing and reliability requirements of real-time applications. An implementation of the program on an architecture is then analyzed to check whether the desired timing and reliability requirements are met. The core framework extends the LET model by accounting for reliability and refinement. The reliability model separates the reliability requirements of tasks from the reliability guarantees of the architecture. The requirement expresses the desired long-term reliability, while the architecture provides a short-term reliability guarantee (e.g., a failure rate for each iteration). The analysis checks whether the short-term guarantee ensures the desired long-term reliability. The refinement model allows replacing one task by another during program execution. Refinement preserves schedulability and reliability, i.e., if a refined task is schedulable and reliable for an implementation, then the refining task is also schedulable and reliable for the implementation. Refinement enables concise specification without overburdening the analysis.

Along with the formal HTL model and subsequent analyses (both with and without refinement), the work presents a compiler that accepts an HTL program. The compiler checks composition and refinement constraints, performs schedulability and reliability analyses, and generates code for replicated, distributed implementation of the HTL program on a virtual machine. Three real-time controllers, one each from automatic control, automotive control and avionic control, are used to illustrate the steps in modeling and analyzing HTL programs.
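
The LET abstraction at the heart of HTL can be pictured with a short sketch (invented API; the reliability and refinement models are not represented): a task reads its inputs at its release instant and its outputs become visible exactly at its termination instant, independently of when the computation physically runs in between:

    class LetTask:
        """Task whose logical execution time equals its period: inputs are
        read at release, outputs become visible only at termination."""
        def __init__(self, name, fn):
            self.name, self.fn, self.pending = name, fn, None

        def release(self, inputs):
            # The computation may physically run anywhere inside the period.
            self.pending = self.fn(inputs)

        def terminate(self, outputs):
            if self.pending is not None:
                outputs[self.name] = self.pending   # visible only now

    outputs = {"ctrl": 0.0}
    task = LetTask("ctrl", lambda ins: 0.5 * ins["sensor"])
    for t in range(0, 40, 10):                      # period = 10 time units
        task.terminate(outputs)                     # previous instance ends
        task.release({"sensor": float(t)})          # next instance starts
        print("t=%d ctrl=%s" % (t, outputs["ctrl"]))

The one-period input-to-output delay visible in the printout is the price LET pays for timing behavior that is deterministic regardless of scheduling.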

Algorithms for an Autonomous Car
Edwin Olson, MIT CSAIL, January 8, 2008.

The DARPA Urban Challenge asked: is it possible to build an autonomous car that can safely navigate a traffic-filled urban environment? In November 2007, 35 teams met in Victorville, CA to find out the answer. This talk will describe Team MIT's vehicle, which was one of only six vehicles to successfully finish the course. The basic architecture of the system (from road detection to path planning) will be described, along with some of the algorithms that were used. This talk will highlight the obstacle detection and tracking system, which combined data from over twenty sensors including RADARs and laser range finders.

2007 presentations

Reducing Energy consumption in Wireless Sensor Networks
Carlo Fischione, UC Berkeley, December 11, 2007.

Wireless Sensor Networks (WSNs) are ad hoc networks of tiny, autonomous devices equipped with wireless communication and sensing capabilities for networking, control, and monitoring purposes. WSNs are characterized by limited resources in terms of communication, computation, and energy supply. The need to achieve maximum efficiency from these systems motivates a careful design that integrates functionalities traditionally considered separately, such as communication, computation, and control. Indeed, despite major recent advances in each of these areas, several open questions remain for the overall design of WSNs. In particular, the limited processing resources of wireless nodes have a significant impact on many aspects of the system. Significant performance improvements can be expected from exploiting the joint design of communication, control, and computation in WSNs, using the Platform-Based Design approach.

In this talk, we illustrate such a design methodology for the problem of minimizing the energy consumption of WSNs. Specifically, we present some relevant aspects of two coding schemes available in the literature: Minimum Energy (ME) coding and Modified Minimum Energy (MME) coding. We start from a detailed and general model of the energy consumption of the nodes, which is a function of the hardware characteristics of the transceivers, the coding technique, the radio transmit powers, and the dynamics of the wireless channel. Then, we propose a new method to minimize the energy by optimizing all the variables of the model. In particular, we suggest an optimal way to design the detection thresholds of the receivers. Exploiting limited feedback of the channel state and resorting to the theory of contractive functions, a distributed radio power minimization algorithm is derived. The algorithm is shown to be stable and requires little computational processing, making it amenable to implementation on limited hardware platforms. Next, the bit error probability is analyzed. Finally, the theoretical framework is applied to real WSNs to illustrate the tradeoffs between ME and MME coding.
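
The idea behind ME coding can be pictured with a small sketch (a toy illustration; the optimized detection thresholds and power control from the talk are not modeled): since transmitting a high bit costs energy, the most probable symbols get the codewords with the fewest high bits:

    def me_codebook(num_symbols, width):
        """Codewords of `width` bits, ordered by number of 1-bits."""
        words = sorted(range(2 ** width), key=lambda w: bin(w).count("1"))
        return words[:num_symbols]

    # Symbols listed in decreasing order of probability.
    symbols = [("a", 0.5), ("b", 0.3), ("c", 0.15), ("d", 0.05)]
    code = dict(zip([s for s, _ in symbols], me_codebook(len(symbols), 3)))

    # Expected number of energy-costly high bits per transmitted symbol.
    avg_ones = sum(p * bin(code[s]).count("1") for s, p in symbols)
    print(code, "expected high bits/symbol: %.2f" % avg_ones)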

From Specifications to Systems
Orna Kupferman, Hebrew University, December 4, 2007.

In system synthesis, we transform a desired specification into a system that is guaranteed to satisfy the specification. The talk will survey the development of synthesis algorithms for reactive systems. We will start with synthesis of closed systems (systems that do not interact with an environment, making the synthesis problem similar to the satisfiability problem), and will reach synthesis of open systems (systems that interact with an environment, making the synthesis problem similar to the problem of generating winning strategies in two-player games). The talk assumes no previous knowledge of specification and synthesis.

Communication Synthesis with Applications to On-Chip Communication and Building Automation Systems
Alessandro Pinto, UC Berkeley, November 27, 2007.

Integration is the next big challenge in Embedded System Design (ESD). The complexity of electronics, from chips to building automation systems, increases with each product generation, while time-to-market shrinks. To overcome the barriers posed by a merely linear increase in productivity, it has become common to assemble systems out of off-the-shelf components. This approach to ESD raises many issues that are related not to the components themselves but to the way in which they interact. The problem of interconnecting components correctly appears at all levels of abstraction. At the design entry level, each piece of the system is usually described with the model of computation that most naturally captures its behavior; when the entire system is put together, the interaction between different parts requires defining the interaction among heterogeneous models. At lower levels of abstraction, the problem reduces to the design of the actual communication architecture that allows the components to communicate. Correctness at this level is also defined with respect to performance metrics such as latency, bandwidth, and reliability.

Thus, given a collection of agents and the way in which they communicate (usually specified by a set of point-to-point communication requirements), the embedded system designer faces the problem of defining a communication structure such that the communication requirements are satisfied and the cost is minimized. This problem is not trivial, considering the number of different ways in which the abstract point-to-point specification can be refined into a concrete communication structure. However, the flexibility provided by the plethora of communication technologies available today is a great opportunity to "guarantee performance at minimum cost": a motto for the embedded system community.

I will present our ongoing research effort toward the development of an infrastructure for networked embedded systems. I will motivate our work using two design drivers, on-chip communication and building automation systems, and I will argue that the same theory can be applied independently of the specific application. Then, I will show how the theory has been used to develop a tool for the automatic design of communication structures, and I will present some preliminary results. I will conclude with a list of research directions.
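
A much-simplified sketch of the refinement step (invented node names and costs; the real tool also accounts for bandwidth, latency, and shared infrastructure): each point-to-point requirement is routed over the cheapest feasible path in a graph of available communication resources:

    import heapq

    links = {  # node -> [(neighbor, cost)]
        "cpu": [("bus", 1.0), ("noc", 3.0)],
        "bus": [("cpu", 1.0), ("dsp", 1.0)],
        "noc": [("cpu", 3.0), ("dsp", 1.0)],
        "dsp": [("bus", 1.0), ("noc", 1.0)],
    }

    def cheapest_path(src, dst):
        """Dijkstra's algorithm over the link library."""
        pq, seen = [(0.0, src, [src])], set()
        while pq:
            cost, node, path = heapq.heappop(pq)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nbr, c in links[node]:
                if nbr not in seen:
                    heapq.heappush(pq, (cost + c, nbr, path + [nbr]))
        return float("inf"), None

    for req in [("cpu", "dsp")]:            # one point-to-point requirement
        print(req, "->", cheapest_path(*req))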

The Theory of Fast and Robust Adaptation
Naira Hovakimyan, Virginia Tech, November 13, 2007.

The history of adaptive control systems dates back to the early 1950s, when the aeronautical community was struggling to advance aircraft speeds to higher Mach numbers. In November of 1967, the X-15 launched on what was planned to be a routine research flight to evaluate a boost guidance system, but it went into a spin and eventually broke up at 65,000 feet, killing the pilot, Michael Adams. It was later found that the onboard adaptive control system was to blame for this incident. Exactly thirty years later, fueled by advances in the theory of nonlinear control, the Air Force successfully flight-tested the unmanned, unstable, tailless X-36 aircraft with an onboard adaptive flight control system. This was a landmark achievement that dispelled some of the misgivings that had arisen from the X-15 crash in 1967. Since then, numerous flight tests of Joint Direct Attack Munitions (JDAM) weapons retrofitted with an adaptive element have met with great success and have proven the benefits of adaptation in the presence of component failures and aerodynamic uncertainties. However, the major challenge of stability/robustness assessment for adaptive systems is still addressed by testing the closed-loop system against all possible variations of uncertainties in Monte Carlo simulations, the cost of which increases with the growing complexity of the systems. This presentation will give an overview of the limitations inherent in conventional adaptive controllers and will introduce a new way of thinking about adaptive control that leads to fast and robust adaptation with provable control specifications and guaranteed stability/robustness margins. Various applications will be discussed during the presentation to demonstrate the tools and the concepts.

Using the Principles of Synchronous Languages in Discrete-event and Continuous-time Models
Edward Lee, UC Berkeley, October 23, 2007.

In this talk, I outline semantic models for discrete-event (DE) systems and continuous-time (CT) systems as generalizations of synchronous/reactive (SR) semantics. SR is based on solutions of fixed-point equations over a flat partial order at ticks of a global clock. In our version of DE systems, a similar fixed-point semantics is used, but there is a notion of the passage of time between ticks of the global clock. Specifically, we use superdense time, where a time stamp is a pair (t,n), with t a non-negative real number and n a natural number. SR becomes the special case where t=0. In DE, there is also a premise that for any system component, absent inputs produce absent outputs. This observation leads to an optimization that amounts to the usual event-queue scheduling used in discrete-event simulators. For CT models, we use the same superdense model of time (to fully support hybrid systems), but allow values to evolve continuously between discrete times. We describe an ideal-solver semantics that gives a rigorous abstract semantics to such systems, and show how practical ODE solvers approximate this semantics. We show that by building all three semantic models on a common foundation, it becomes easy and clean to combine CT, DE, and SR models heterogeneously.
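
A minimal sketch of the resulting event queue (not the Ptolemy II implementation): ordering events by superdense tags (t, n) makes simultaneous-but-ordered events deterministic, and SR is recovered as the special case t = 0:

    import heapq

    queue = []
    def post(t, n, event):
        # Events are ordered lexicographically by the tag (t, n).
        heapq.heappush(queue, ((t, n), event))

    post(1.0, 0, "sensor sample")
    post(0.0, 1, "second reaction of the SR tick")
    post(0.0, 0, "first reaction of the SR tick")
    post(1.0, 1, "reaction to the sample")

    while queue:
        (t, n), ev = heapq.heappop(queue)
        print("t=%s n=%d: %s" % (t, n, ev))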

Design of Robust Dynamic Networks
Andrzej Banaszuk, United Technologies, October 16, 2007.

We will present an approach that could radically accelerate the numerical simulation, model reduction, and quantification of uncertainty in large multi-scale nonlinear networks of interconnected dynamic components. In this approach, a large network is decomposed into subcomponents using spectral graph theory. Operator theory and geometric dynamics methods are used to analyze the propagation of uncertainty in the subcomponents. We will also show how this approach could enable model-based analysis and design of robust aerospace and building systems by managing the interconnections between components. This research is conducted by the DARPA Robust Uncertainty Management team, including UTRC, UCSB, Caltech, Stanford, Yale, and Georgia Tech. Current job opportunities at UTRC in related areas will also be presented.

From Actors to Gates
Jorn Janneck, Xilinx Research Labs, October 9, 2007.

The growing complexity and capability of programmable logic devices generates a need for languages, methodologies and tools that better support the projection of complex algorithms and applications into these highly parallel machines than traditional hardware design does.

At Xilinx, we are working on tools that allow users to program FPGAs using a dataflow methodology. A key component is the translation of actors, written in a dataflow actor language such as CAL, into synthesizable RTL. In this talk I will describe the architecture of that tool and discuss some details of its operation.

Ingredients for Successful System Level Automation & Design Methodology - Support for Multiple Models of Computation, Directed test case generation, Reflection & Introspection and Service-oriented tool integration environment
Hiren Patel, UC Berkeley, October 4, 2007.

The recent proliferation of system-level design methodologies, languages, and frameworks offers many alternative ways to model, simulate, verify, and realize complex computing systems. Modern embedded systems are not only complex in functionality and large in size, but they also contain heterogeneous components interacting with each other. For example, a system-on-chip based embedded system may contain digital signal processing components to process streaming data, computational components to perform specific calculations, micro-controllers to manage the interaction between these components, and analog-to-digital converters to interact with the environment. To cope with the complexity and heterogeneity of such systems, a number of methodologies have been developed into system-level design languages and frameworks for modeling and simulation, together with tools and design environments. Although each of these system-level design languages and frameworks offers attractive methodologies individually, we believe a successful methodology requires a few essential ingredients to be confluent in a single design framework. We illustrate the ingredients we consider essential through proof-of-concept extensions to SystemC. This talk will mainly provide an overview of heterogeneous extensions of SystemC and a directed test case generation approach for SystemC.

Graphical System Design
David Fuller, National Instruments, September 26, 2007.

The number of domain experts developing embedded systems has increased significantly in the last few years. Development tools must evolve to enable domain experts to analyze their problems rapidly, and then design, prototype and deploy solutions. We have found that graphical programming and graphical system modelling are very useful for this purpose and that with these graphical methodologies, end users can easily design and program systems composed of FPGAs and embedded processors for control and measurement applications.

We will present the lessons we have learned developing graphical design tools and we will share our vision of how such tools may evolve into higher-level engineering suites focused on the graphical creation of embedded and industrial systems. We will discuss the visual modelling of relationships between software and hardware resources, the relationship of structured dataflow to other Models of Computation, the limits of structured dataflow, how we plan to address these limits, and how we plan to model the relationships between time and order.

To demonstrate our results and future vision, we will use LabVIEW, which has evolved over the last twenty years into the most popular graphical programming language for Test and Measurement applications, and early prototypes of our next generation environment, recently developed in our labs.

Stochastic Omega-Regular Games
Krishnendu Chatterjee, UC Berkeley, September 25, 2007.

Dynamic games played on game graphs with omega-regular winning conditions provide the theoretical framework for the study of controller synthesis and multi-process verification. The strictly competitive (zero-sum) game formulation is appropriate for controller synthesis. We study such stochastic dynamic games with canonical forms of omega-regular conditions, and show that stochastic games with parity, Rabin, Streett and Mueller objectives are in NP and coNP, NP-complete, coNP-complete, and PSPACE-complete, respectively. The previous best known bounds were 2EXPTIME for Rabin, Streett, and Mueller objectives. We also present strategy improvement algorithms for the above games.

We also study the more general setup of nonzero-sum games. We prove existence of Nash equilibria in turn-based stochastic games, and epsilon-Nash equilibria in several restricted classes of more general recursive games, for all epsilon>0.

A Multi-Threaded Reactive Processor
Reinhard von Hanxleden, Christian-Albrechts-Universitat (CAU) Kiel, September 18, 2007.

Many embedded systems belong to the class of reactive systems, which continuously react to inputs from the environment by generating corresponding outputs. The programming of reactive systems typically requires the use of non-standard control flow constructs, such as concurrency or exception handling. The synchronous language Esterel has been developed to express reactive control flow patterns in a concise manner, with a clear semantics that imposes deterministic program behavior under all circumstances. However, classical Esterel synthesis approaches suffer from the limitations of traditional processors, whose instruction set architectures are geared towards the sequential von Neumann execution model, or they are very inflexible if hardware synthesis is involved.

Recently, another alternative for synthesizing Esterel has emerged, the reactive processing approach. This talk presents a multi-threaded reactive processor, the Kiel Esterel Processor (KEP). The KEP Instruction Set Architecture supports reactive control flow directly, and provides a very dense encoding; code size is typically an order of magnitude smaller than that of the MicroBlaze, a 32-bit COTS RISC processor core. The worst case reaction time is typically improved by 4x, and energy consumption is also typically reduced to a quarter. Apart from efficiency and determinism concerns, another advantage of reactive processors is that due to their comparatively simple structure (no caches, no pipelining) and their direct implementation of the synchronous model of computation it becomes feasible to precisely characterize their timing behavior.

The Timing Definition Language (TDL) domain in Ptolemy
Stefan Resmerita, University of Salzburg, September 13, 2007.

Ptolemy II is well positioned to deal with embedded control systems, which generally consist of distributed collections of interacting time-triggered and event-triggered computational tasks that also interact with a physical environment, which can usually be represented by differential equations. We present an experimental Ptolemy domain for modeling time-triggered computations adhering to the semantics of the Timing Definition Language (TDL), which allows the specification of real-time components that rely on the Logical Execution Time (LET) abstraction introduced in the Giotto project. We show how the TDL domain in Ptolemy can be used for modeling and simulation of embedded control systems involving compositions of time-triggered, discrete event, and continuous time actors. The integration also enables what we call transparent distribution of TDL components whose functionality has been modeled with the existing Ptolemy domains. Thus, TDL components modeled in Ptolemy can be automatically mapped to a FlexRay cluster or any other potentially distributed platform supported by the TDL:VisualDistributor tool.

Problems in Resource Modeling and Scheduling for Embedded Systems
Feng Zhao, Microsoft Research, September 11, 2007.

Multi-processor, multi-radio platforms offer an attractive way to customize the aggregation of computation and communication resources for real-time embedded applications while keeping resource use minimal. I will discuss problems arising in modeling such systems and their components, and describe our ongoing work in developing the mPlatform and techniques that exploit performance-resource trade-offs for energy-aware real-time behaviors.

Symbolic Reachability Analysis of Lazy Linear Hybrid Automata
Susmit Jha, UC Berkeley, September 4, 2007.

Lazy linear hybrid automata (LLHA) model the discrete-time behavior of control systems containing finite-precision sensors and actuators interacting with their environment under bounded inertial delays. We present a symbolic technique for reachability analysis of lazy linear hybrid automata. The model permits invariants and guards to be nonlinear predicates but requires flow values to be a finite set of constants. Assuming finite precision, flows represented by uniform linear predicates can be reduced to those containing values from a finite set of constants. We present an abstraction hierarchy for LLHA. Our verification technique is based on bounded model checking and k-induction for reachability analysis at different levels of the abstraction hierarchy within an abstraction-refinement framework. The counterexamples obtained during BMC are used to construct refinements in each iteration. This technique has been tested on examples including a highway control system, and we found that it scales well compared to other tools such as Phaver.
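
The BMC/k-induction combination can be illustrated on a toy finite-precision transition system (entirely invented; the real technique works symbolically on LLHA rather than by enumeration):

    from itertools import product

    # State: x in 0..14 (finite precision, saturating); flows are the
    # constant increments {-2, 0, 2}; initial state x = 0; property P: x even.
    STATES = range(15)
    FLOWS = (-2, 0, 2)
    init = lambda x: x == 0
    step = lambda x, u: max(0, min(14, x + u))
    prop = lambda x: x % 2 == 0

    def bmc(k):
        """Search for a length-k path from an initial state to a bad state."""
        for x0 in filter(init, STATES):
            for us in product(FLOWS, repeat=k):
                x = x0
                for u in us:
                    x = step(x, u)
                if not prop(x):
                    return (x0, us)       # concrete counterexample
        return None

    def k_inductive(k):
        """If P holds for k consecutive steps from *any* state, it holds at
        step k+1: together with BMC up to depth k, this proves P invariant."""
        for x0 in STATES:
            for us in product(FLOWS, repeat=k):
                xs = [x0]
                for u in us:
                    xs.append(step(xs[-1], u))
                if all(prop(x) for x in xs[:-1]) and not prop(xs[-1]):
                    return False
        return True

    print("BMC depth 5 counterexample:", bmc(5))   # None: no bug found
    print("1-inductive:", k_inductive(1))          # True: property proved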

A Formal Framework for the Correct-by-construction and Verification of Distributed Time Triggered Systems
Dr. Ramesh, GM India Science Lab, August 28, 2007.

In modern-day automobiles, several critical vehicle functions are handled by ECS (Electronics and Control Software) applications. These are distributed, real-time, embedded systems, and due to the level of criticality in their operation, they require a high-integrity development and verification process. Model-based development of such applications aims to increase their integrity through the use of explicit models employed in clearly defined process steps leading to correct-by-construction artifacts. Ensuring consistency and correctness of models across the various design steps is crucial in model-based development methodologies. This work proposes a formal framework for this purpose. The proposed framework captures models at two stages of development: an initial abstract, centralized functional model and a subsequent model of the distributed system consisting of several ECUs (Electronic Control Units) communicating over a common bus. The framework can be used both for the correct-by-construction derivation of distributed models from the abstract functional model and for the verification of existing distributed implementations against a given functional model. We have demonstrated the utility and effectiveness of the proposed framework with two case studies: one using a conventional cruise control system and the other using a brake-by-wire sub-system.

A Design Flow for the Development, Characterization, and Refinement of System Level Architectural Services
Douglas Densmore, UC Berkeley, May 15, 2007.

Electronic Design Automation (EDA) is experiencing a slowdown in growth. This slowdown can partially be attributed to the fact that current tools do not adequately address current design concerns, primarily increased design complexity, heterogeneous designs, and shrinking time-to-market windows. The recent introduction of Electronic System Level (ESL) tools has the potential to reverse this slowdown, and many analysts are bullishly optimistic about their chances. Among other things, ESL tools attempt to increase the abstraction and modularity with which designs can be specified. However, the introduction of these design styles does not by itself guarantee the level of accuracy and efficiency required for widespread adoption and eventual success.

This thesis introduces a design flow which improves abstraction and modularity while remaining highly accurate and efficient. Specifically, this work explores a Platform-Based Design approach to modeling architectural services. Platform-Based Design is a methodology in which purely functional descriptions of a system are transformed top-down via constraints and assigned (or mapped) to architecture services which have their capabilities and costs exported from the bottom up. The meeting point is referred to as a "platform". Architecture services are therefore a set of library elements characterized by their capabilities (what functionality they support) and cost (execution time, power, etc.). The functional behavior (an orthogonal aspect) can be mapped to these services during design simulation. This design flow specifically focuses on how to create architecture service models of programmable platforms (FPGAs, for example). These architecture service models are created at the transaction level, are preemptable, and export their abilities to the mapping process. An architecture service library is described for Xilinx's Virtex II Pro FPGA. If this library is used, a method exists to extract the architecture topology to program an FPGA device directly, thus avoiding error-prone manual techniques. As a consequence of this programmable platform modeling style, the models can be annotated directly with characterization data from a concurrent characterization process, to be described.

Finally, in order to support various levels of abstraction in these architecture service models, a refinement verification flow will be discussed as well. Three styles will be proposed, each with its own emphasis (event-based, interface-based, and compositional component-based). They are each deployed depending on the designer's needs and the environment in which the architecture is developed. These needs include changing the topology of the architecture model, modifying the operation of the architecture service, and exploring the tradeoffs between how one expresses the services themselves and the simulation infrastructure which schedules the use of those services.

To provide a proof of concept of these techniques, several design scenarios are explored, including Motion-JPEG encoding and an H.264 deblocking filter. The results show not only that the proposed design flow is more accurate and modular than other approaches, but also that, through its emphasis on preserving fidelity, it prevents the selection of poorly performing or incorrectly functioning designs.

Automating Sequential Clock Gating for Power Optimization
Anmol Mathur, Calypto Design Systems, May 1, 2007.

The talk will focus on using output-observability-based and input-stability-based don't-cares in an RTL design to identify redundant writes to state elements (flip-flops, latches, memories). The redundant writes can be eliminated by gating off the clocks of the corresponding state elements under the conditions when the writes are redundant. Such sequential clock gating can significantly reduce switching power in the design, both at the clock-gated flops themselves and in the logic they feed. We will focus on the technology challenges in performing functional analysis on a design to automatically generate sequential clock gating opportunities, in analyzing the area, delay, and power cost of adding the clock gating logic, and in verifying the sequential clock gating changes. The talk will conclude by describing the capabilities of Calypto's PowerPro CG product, which performs sequential clock gating automatically, and by showing power reduction results on several real industrial designs.
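
The underlying observation can be pictured with a toy simulation (an invented example; the actual tool proves redundancy from observability and stability don't-cares rather than observing it dynamically): whenever a flop would be reloaded with the value it already holds, its clock edge is unnecessary:

    def simulate(next_value, cycles):
        """Count cycles on which the register's clock could be gated."""
        q, gated = 0, 0
        for cyc in range(cycles):
            d = next_value(cyc, q)
            if d == q:
                gated += 1        # redundant write: clock can be gated off
            q = d
        return gated

    # Example: a register whose value only changes every 4th cycle.
    gated = simulate(lambda cyc, q: cyc // 4, 16)
    print("clock gated on %d/16 cycles" % gated)
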
Closed-loop impulse control and the theory of Hybrid Systems
Alexander B. Kurzhanski, April 24, 2007.

The treatment of an array of recent applied problems in feedback control requires dealing with systems with jumps, with switching, and with resets of the phase coordinates. In addition, some problems are to be solved in the class of impulse controls, which allow delta-functions and/or their derivatives and are at the same time presented in the form of feedback control strategies. All these effects can be treated within the schemes of Dynamic Programming for impulse control.

In this presentation we introduce the basics of a theory of closed-loop impulse control constructed through appropriate modifications of the Hamiltonian formalism and related Dynamic Programming techniques.

Here the problems are solved in the class of generalized functions, which includes delta-functions and their derivatives. Such an ideal, abstract form of the results inevitably requires a physically realizable interpretation, which is presented here in the form of problems solved through the techniques of "approximate dynamic programming", also used to formalize the ideal scheme. Presented among the applications is a possible formalization of hybrid system models in terms of the given approach.

Finally, some computational methods are discussed, aimed at serving an emerging "calculus of value functions" which is at the core of many feedback control solutions. The approaches used here are based on the investigations of the author with P. Varaiya at UCB.

Automated Extraction of Inductive Invariants to Aid Model Checking
Mike Case, April 10, 2007.

Many applications, notably formal model checking, can be aided by inductive invariants: small local properties that can be proved by simple induction. In this work we present a way to automatically extract inductive invariants from a design and then prove them. The set of candidate invariants is broad and expensive to compute, and many invariants can be shown not to be helpful for model checking. In this work, we find invariants that show a specific targeted state to be unreachable. In this way, we can provide a calling application with specific invariants that are few in number and immediately help the problem at hand. This method is applied to interpolation, a state-of-the-art model checking technique, where invariants are used to refute an error trace and thus help discard spurious counterexamples.

From Concept to Silicon
Vason P. Srini, April 3, 2007.

Several approaches have been developed during the past three decades to turn ideas and constructive specifications expressed in high-level languages into custom hardware (e.g., silicon chips). Some of these approaches have been successful in specific domains such as DSP. In this talk I will focus on component-based system design using Simulink and Xilinx's System Generator. I will also discuss the implementation of custom (proprietary) algorithms expressed in Matlab using AccelDSP, and the challenges in implementing some of these algorithms efficiently.

SAT Sweeping with Local Observability Don't-Cares
Nathan Kitchen, March 20, 2007.

SAT sweeping is a method for simplifying an And-Inverter Graph (AIG) by systematically merging graph vertices from the inputs towards the outputs, using a combination of structural hashing, simulation, and SAT queries. Due to its robustness and efficiency, SAT sweeping provides a solid algorithm for Boolean reasoning in functional verification and logic synthesis. In previous work, SAT sweeping merged two vertices only if they were functionally equivalent. In this project, we developed a significant extension of the SAT-sweeping algorithm that exploits local observability don't-cares (ODCs) to increase the number of vertices merged. We use a novel technique to bound the use of ODCs, and thus the computational effort to find them, while still finding a large fraction of them. Our results on a set of industrial benchmark circuits demonstrate that ODC-based SAT sweeping yields significantly more graph simplification, with great benefit for Boolean reasoning, at a moderate increase in computational effort. Future work includes testing the effect of the simplification on the runtimes of full-blown verification algorithms.
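
The simulation phase of SAT sweeping is easy to sketch (a toy Python illustration using exhaustive simulation; real implementations use random vectors, confirm candidates with SAT queries, and additionally exploit the ODCs described above):

    from itertools import product

    # name -> function of the primary inputs (a, b, c)
    nodes = {
        "n1": lambda a, b, c: a and b,
        "n2": lambda a, b, c: not (not a or not b),   # equivalent to n1
        "n3": lambda a, b, c: a or c,
    }

    # Bucket nodes by their simulation signature over all input patterns.
    buckets = {}
    for name, fn in nodes.items():
        sig = tuple(fn(*v) for v in product([False, True], repeat=3))
        buckets.setdefault(sig, []).append(name)

    for sig, group in buckets.items():
        if len(group) > 1:
            print("merge candidates (confirm with SAT):", group)
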
Selective Term-Level Abstraction Using Type-Inference
Bryan Brady, March 13, 2007.

Term-level verification is a formal technique that seeks to verify RTL hardware descriptions by abstracting away details of data representations and operations. The key to making term-level verification automatic and efficient is deciding what to abstract. Using abstraction in the wrong places will result in spurious counterexamples; not using abstraction, and instead precisely modeling RTL at the word level, will result in large and inefficient verification models. Our initial results indicate that a combination of word-level reasoning with selective term-level abstraction can make verification more efficient. However, creating abstractions by hand is a time-consuming and error-prone process. We present a type-inference-based algorithm that semi-automatically generates abstractions from RTL models. This process avoids human-introduced errors and eliminates the need to manually create a separate verification model. To show that type-inference-based abstraction is useful, we plan to perform Burch-and-Dill-style processor verification on the OpenSPARC processor. As this is still work in progress, the main goal of this presentation is to get early feedback.

Models for Data-Flow Sequential Processes
Mark B. Josephs, London South Bank University, March 13, 2007.

A family of mathematical models of nondeterministic data flow is introduced. These models are constructed out of sets of traces, successes, failures and divergences, cf. Hoare's traces model, Roscoe's stable-failures model and Brookes and Roscoe's failures/divergences model of Communicating Sequential Processes. As in CSP, operators are defined that are convenient for constructing processes in the various models.

Control of Hybrid Systems: Theory, Computation and Applications
Manfred Morari, ETH Zurich, March 6, 2007.

Theory, computation, and applications define the evolution of the field of control. This premise is illustrated with the emerging area of hybrid systems, which can be viewed, loosely speaking, as dynamical systems with switches. Many practical problems can be formulated in the hybrid system framework: power electronics are hybrid systems by their very nature; systems with hard bounds and/or friction can be described in this manner; and problems from domains as diverse as driver assistance systems, anaesthesia, and active vibration control can be put in this form. We will highlight the theoretical developments and mention the extensive software that helps bring the theory to bear on practical examples. We will close with an outlook for hybrid systems and control more generally.

Time-Portable Real-Time Programming with Exotasks
Christoph Kirsch, University of Salzburg, February 27, 2007.

High Level Mathematical Modeling and Parallel/GRID Computing with Modelica using OpenModelica
Peter Fritzson, Linköpings universitet, February 20, 2007

Modelica is a general declarative, equation-based, object-oriented language for continuous and discrete-event modeling of physical systems for the purpose of efficient simulation. The language unifies and generalizes previous object-oriented modeling languages. The Modelica modeling language and technology is being warmly received by the world community in modeling and simulation, and is bringing about a revolution in this area, based on its ease of use, visual design of models from combinations of Lego-like predefined model building blocks, its ability to define model libraries with reusable components, and its support for modeling and simulation of complex applications involving parts from several application domains.

In this talk we present the Modelica language and its open-source OpenModelica environment, including a compiler, an Eclipse plugin, and an electronic notebook system with the DrModelica self-instructing course material. We also present methods and OpenModelica tooling for automatic extraction of fine-grained parallelism from high-level equation-based models, as well as for manually specifying coarse-grained parallelism and algorithmic parallelism. We believe the fine-grained parallelism in particular fits well with the coming multi-core architectures. Some measurements from parallel execution are presented and a demo of OpenModelica is given. (see www.ida.liu.se/projects/OpenModelica)

Discrete Event Models: Getting the Semantics Right
Edward A. Lee, February 6, 2007

Discrete-event models are systems where components interact via timed events. Although there are many languages and simulators with discrete-event semantics (VHDL, OpNet Modeler, ns2, etc.), there is no widespread agreement on the precise semantics. This talk examines a formal foundation for discrete-event systems that clarifies interesting corner cases, where events are simultaneous, ordered or unordered, and where Zeno conditions arise. It introduces analytical tools for studying such cases, lends insight into parallel and distributed execution of discrete-event models, and suggests a unified semantics that is efficient and clean.

SCADA for distributed systems: application to the control of irrigation canals
Xavier Litrico, Cemagref, January 30, 2007

Irrigation represents more than 80% of the world's fresh water consumption, and most irrigation systems use open channels to convey water from the resource to the users. Open-channel irrigation canals are distributed systems with complex dynamics. Automatic control of such systems may dramatically increase water efficiency, which is critical for such a precious resource.

This talk will present methods developed at Cemagref Montpellier, France, to control and supervise such systems using SCADA. These methods are based on a mechanistic model of the canal (the Saint-Venant equations) and use the available sensors and actuators to provide tools for real-time control and supervision. These tools have been validated on a real canal (the Gignac canal), located close to Montpellier.

The series structure of irrigation canals can be used to design multivariable controllers based on the aggregation of decentralized monovariable controllers. Classical upstream and downstream control strategies are first studied in terms of stability, performance, and robustness. Second, a new structured mixed control scheme is proposed, leading to a fully multivariable controller with guaranteed stability. This controller combines the advantages of both classical control policies: it recovers the local upstream control performance with respect to unpredicted withdrawals, while at the same time ensuring that the water resource is efficiently managed, as in the downstream control case. It can be implemented in a semi-decentralized way, so that each localized controller communicates only with its closest neighbors. It provides an interesting framework for the general design of multivariable controllers for irrigation canals.

Automatically controlled systems need supervision to detect and isolate possible faults. We develop two data reconciliation (DR) methods for the validation of measurements, the detection and isolation of sensor faults, and the reconstruction of missing data: first, a static data reconciliation method is developed for a hydraulic cross-structure of an irrigation canal; then, a dynamic data reconciliation method is applied to take account of measurements at the upstream and downstream ends of each pool. The methods are evaluated on realistic measurements from an irrigation canal located in the south of France.

Resource-Aware Programming
Walid Taha, Rice University, January 23, 2007

As embedded systems play an increasingly important role in our lives, demand for technologies that can enhance the reliability of such systems will continue to grow. This talk examines why traditional high-level programming languages can be inappropriate for such applications, and introduces the idea of resource-aware programming (RAP) languages. RAP languages bridge the gap between traditional software engineering demands and embedded software development by 1) providing a very expressive macro system that allows arbitrary computations to be done on the development (or configuration) platform, 2) performing *all* type checking before macro expansion, even when macro expansion depends on configuration-time inputs, and 3) using static typing to ensure that only resource-bounded computations remain after macro expansion. The talk presents recent results from our work in this area and introduces MetaOCaml Concoqtion, an extension of OCaml with indexed types.

A Model-Driven Approach to Embedded Control System Implementation
Jan F. Broenink, University of Twente, January 19, 2007.

The work presented here is on setting up methodological support, including (prototype) tools, for the design of distributed hard real-time embedded control software for mechatronic products. The use of parallel hardware (CPUs, FPGAs) and parallel software is investigated, to exploit the inherent parallel nature of embedded systems and their control.

Two core models of computation are used to describe the behavior of the total mechatronic system (plant, control, software, and I/O): discrete-event systems (DES) and continuous-time systems (CTS). They are coupled via co-simulation, making it possible to do consistency checking at the boundaries. This allows for integration of discipline-specific parts at the model level (during the design phases) instead of at the code level (during the realization and test phases). Cross-view design-change influences receive specific attention, to relax the tension between several dependability issues (such as reliability and robustness) while keeping design time (and thus design cost) under control. Furthermore, the design work can be done as a stepwise refinement process. This yields a shorter design time and a better-quality product. The method is illustrated with a case study using the tools being prototyped.

2006 presentations

The Power of Higher-Order Components in System Design
Adam Cataldo, December 12, 2006.

This talk is about higher-order composition languages. A composition language is a language for constructing systems by composing subsystems. A block-diagram language is a typical example. A composition language is "higher-order" if components may be parameterized by other components. Allowing higher-order components has some important practical and theoretical implications which result from the expressiveness of higher-order composition.

The purpose of this talk is to motivate higher-order composition languages and provide a theoretical framework for reasoning about them. I will begin with an overview of some related concepts in functional programming. I will then give a distributed systems example and a digital circuit example as case studies for higher-order composition. I will explain how partial evaluation techniques separate higher-order composition from other system design concerns, particularly execution semantics. Finally, I will describe a calculus I developed to reason about higher-order composition languages. This calculus formalizes the expressiveness that higher-order components can add to composition languages. I use it to show how large systems can be written more "succinctly" in higher-order composition languages than in non-higher-order composition languages.
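
A flavor of higher-order composition in a dozen lines of Python (an invented illustration, not the calculus from the talk): pipeline is a component parameterized by other components, something a first-order block-diagram language cannot express directly:

    def pipeline(*stages):
        """A higher-order component: composes components, where each
        component maps an input stream to an output stream."""
        def composed(stream):
            for stage in stages:
                stream = stage(stream)
            return stream
        return composed

    scale  = lambda k: (lambda xs: [k * x for x in xs])   # component factory
    offset = lambda b: (lambda xs: [x + b for x in xs])

    system = pipeline(scale(2), offset(1))   # built from components, not values
    print(system([1, 2, 3]))                 # [3, 5, 7]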

Some Results on Optimal Estimation and Control for Lossy Networked Control Systems
Luca Schenato, University of Padova, December 5, 2006.

The rapid convergence of sensing, computing, and wireless communication technologies in cost-effective, low-power, miniature devices is enabling a host of new applications. In particular, wireless sensor networks have the ability to inexpensively gather data over a network at a very fine temporal and spatial granularity. So far, sensor networks have been employed for static monitoring applications, but their ability to process data in real time could enable a number of new control applications. However, control applications running over these distributed networks have to deal with important issues such as communication delay, data loss, time synchronization, power management, and scaling, to name just a few.

In this talk I will describe some problems related to the effect of packet loss and random delay on the design and performance of networked control systems. In particular, I will try to show that communication and control are tightly coupled and cannot be addressed independently.

For example, where should the controller be placed: near the sensors, near the actuators, or somewhere in the middle? What is the impact of packet loss on closed-loop performance? Which features should communication protocols have to reduce performance degradation due to packet loss? If sensors and actuators are provided with computational resources, can they be used to improve performance? From a closed-loop performance perspective, is it better to have a protocol with small packet delay and high packet loss, or one with low packet loss and longer delay? If the actuators have no computational resources, what should be done when a control packet is lost: reuse the previous control value (hold-input) or do nothing (zero-input)? These are some of the questions addressed in the presentation. I will propose some possible solutions and comment on their implications for the design of general networked control systems.
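
The hold-input versus zero-input question is easy to experiment with (a toy Python sketch with an invented scalar plant and a deadbeat control law, not the models from the talk):

    import random

    def run(policy, loss_prob, steps=200, a=1.1, seed=0):
        """Simulate x' = a*x + u with command u = -a*x sent over a lossy link;
        on loss, apply the previous command (hold) or zero (zero)."""
        rng = random.Random(seed)
        x, u_prev, cost = 1.0, 0.0, 0.0
        for _ in range(steps):
            u = -a * x                       # command computed remotely
            if rng.random() < loss_prob:     # control packet lost
                u = u_prev if policy == "hold" else 0.0
            x = a * x + u
            u_prev = u
            cost += x * x                    # accumulated quadratic cost
        return cost

    for p in (0.1, 0.3, 0.5):
        print(p, "hold: %.2f" % run("hold", p), "zero: %.2f" % run("zero", p))

With this particular (deadbeat) law the zero-input policy happens to win; the talk's point is precisely that which policy is preferable depends on the dynamics and the loss process.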

Graphical System Design: Bringing Embedded Design to the Masses in Science and Engineering
Hugo A. Andrade and Zach Nelson, November 28, 2006.
Graphical system design is an approach to solving design challenges that blends intuitive graphical programming and flexible commercial-off-the-shelf (COTS) hardware to help engineers and scientists more efficiently design, prototype, and deploy embedded systems. In this presentation we look at a couple of the software and hardware technologies behind the tools that allow targeting very different kinds of devices (e.g. host processors, real-time embedded processors, and reconfigurable hardware elements like FPGAs) within a unified environment. We would also like to motivate some important areas of interaction and collaborative research related to this effort.
Dynamically constrained impulse system analysis through a viability approach and applications
Patrick Saint-Pierre, University Paris Dauphine, November 21, 2006.
Reaching a target while remaining in a given set under impulse dynamics can be characterized by a nondeterministic controlled differential equation together with a controlled instantaneous reset equation. The set of initial conditions from which a given objective can be reached, and the optimal control ensuring both target capture and viability, are computed using the Hybrid Guaranteed Capture Basin Algorithm. After recalling how this can be extended to solve optimal control problems in the presence of impulses and uncertainty, we present several applications relevant to engineering and finance, using this approach to design controls that are discontinuous with respect to time or state.
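
The capture-basin idea can be sketched on a discretized toy problem (the sketch below is a naive fixed-point iteration on a grid, not the Hybrid Guaranteed Capture Basin Algorithm itself): grow the target backwards through the dynamics, keeping only states that satisfy the constraint.

    def capture_basin(states, controls, step, target, in_constraint):
        """States from which some control sequence reaches the target
        while the trajectory stays inside the constraint set."""
        basin = set(target)
        changed = True
        while changed:
            changed = False
            for s in states:
                if s in basin or not in_constraint(s):
                    continue
                if any(step(s, u) in basin for u in controls):
                    basin.add(s)
                    changed = True
        return basin

    # Toy 1-D example: x' = x + u on an integer grid, target {0},
    # constraint |x| <= 5, controls u in {-1, 0, +1}.
    print(sorted(capture_basin(range(-10, 11), (-1, 0, 1),
                               lambda s, u: s + u, {0},
                               lambda s: abs(s) <= 5)))  # -> -5 .. 5
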
SHIM: A Scheduling-Independent Concurrent Language for Embedded Systems
Stephen A. Edwards, Columbia University, November 8, 2006.
Concurrent programming languages should be a good fit for embedded systems because they match the intrinsic parallelism of their architectures and environments. Unfortunately, typical concurrent programming formalisms are prone to races and nondeterminism, despite the presence of mechanisms such as monitors.

We propose an alternative for describing such systems. Our SHIM (software/hardware integration medium) language, which uses Kahn networks with rendezvous communication at its heart, provides scheduling-independent concurrency in an asynchronous setting. It is a C-like imperative language extended with concurrency, inter-process communication, and exceptions.

Compared to the SHIM we presented a year ago, this version of the language includes function calls, exceptions, and a novel way of treating function arguments as communication channels. We illustrate our programming model with examples including buffers and recursive pipelines. By construction, they are race-free. By providing a powerful, deterministic formalism for expressing systems, SHIM makes designing systems and verifying their correctness easier.
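
The flavor of scheduling independence can be suggested with a small Python analogy (SHIM is a C-like language with rendezvous channels; the sketch below uses Python threads and bounded blocking queues, which only approximates rendezvous): processes communicate solely through blocking channels, so the printed result does not depend on thread interleaving.

    import threading, queue

    def producer(out, n):
        for i in range(n):
            out.put(i)          # blocks while the channel is full
        out.put(None)           # end-of-stream marker

    def doubler(inp, out):
        while (v := inp.get()) is not None:
            out.put(2 * v)
        out.put(None)

    a, b = queue.Queue(maxsize=1), queue.Queue(maxsize=1)
    threading.Thread(target=producer, args=(a, 5)).start()
    threading.Thread(target=doubler, args=(a, b)).start()
    while (v := b.get()) is not None:
        print(v)                # always 0 2 4 6 8, under any scheduler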

Port-based Modeling and Control for Efficient Bipedal Walking Machines
Vincent Duindam, October 31 and November 7, 2006.
Control strategies for walking robots are traditionally based on either static walking (using full actuation and enforcing stability at all times) or dynamic walking (using no actuation and relying on the natural tendency of certain mechanisms to walk). In this talk, we try to combine these two strategies into a control approach that relies on natural (unforced) walking motions, but uses active control to increase stability and robustness. We give an overview of some of the modeling, analysis, and control aspects involved: physical modeling of general rigid mechanisms in contact, numerical techniques combining gait search and mechanical design, and asymptotic trajectory tracking using zero-energy control. We also present some preliminary experimental results.
"Using mathematical modeling to help decode biological circuits"
Claire Tomlin (Associate Professor, Dept. of Electrical Engineering and Computer Sciences, UC Berkeley; and Dept. of Aeronautics and Astronautics, Stanford University), October 17, 2006.

In this talk, I will present methods that we have designed to analyze and help identify certain protein regulatory networks. Hybrid automata represent one suitable modeling framework, as the protein concentration dynamics inside each cell are modeled using linear differential equations; inputs activate or deactivate these continuous dynamics through discrete switches, which themselves are controlled by protein concentrations reaching given thresholds. We present an iterative refinement algorithm for computing discrete abstractions of a class of symbolic hybrid automata, and we apply this algorithm to a model of multiple cell Delta-Notch protein signaling. The results are analyzed to show that novel, non-intuitive, and biologically interesting properties can be deduced from the computation, thus demonstrating that mathematical modeling which extrapolates from existing information and underlying principles can be successful in increasing understanding of some biological systems.
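
The class of models described above can be suggested with a tiny Python sketch (toy constants, not the paper's Delta-Notch equations): linear dynamics in each discrete mode, with production switched on or off when a concentration crosses a threshold.

    def simulate(x0=0.0, threshold=0.5, rate=1.0, decay=0.8,
                 dt=0.01, steps=500):
        x, mode = x0, "producing"             # continuous and discrete state
        for _ in range(steps):
            prod = rate if mode == "producing" else 0.0
            x += dt * (prod - decay * x)      # linear ODE of the active mode
            # guard: crossing the threshold toggles the discrete mode
            mode = "producing" if x < threshold else "off"
        return x, mode

    print(simulate())  # the concentration settles near the switching threshold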

However, more often, only incomplete abstracted hypotheses exist to explain observed complex patterning and functions in cellular regulatory networks. The second part of the talk will present our results in developing a partial and ordinary differential equation model for Planar Cell Polarity signaling in fly wings. We explicitly demonstrate that the model can explain the complex behaviors of the system, and can be used to test developmental hypotheses, such as the function of certain protein alleles, and the relationship between cell geometry and polarity.

Joint work with Jeff Axelrod (Associate Professor, Stanford University School of Medicine), Keith Amonlirdviman, Ronojoy Ghosh, Robin Raffard, and Anil Aswani.

Control of Hybrid Systems: a Robust Finite State Machine Approach
Danielle C. Tarraf, Caltech, September 29, 2006.
Modeling with the Timing Definition Language (TDL)
Wolfgang Pree, University of Salzburg, September 26, 2006.
TDL shares with Giotto the basic idea of Logical Execution Time (LET), which abstracts from physical execution time and thereby from both the execution platform and the communication topology. This forms the basis for model-based development of real-time software. TDL adds a component model that allows the decomposition of hard real-time applications into modules (= components) that are executed in parallel and that can be transparently distributed. Transparent distribution means that (1) the functional and temporal behavior of a system is the same no matter where a component is executed, and (2) the developer does not have to care about the differences between local and distributed execution of a component. We also present another extension of TDL that improves the support for digital controllers while preserving the LET semantics and deterministic behavior. Although TDL components can be defined in a purely textual way, we also provide a visual and interactive development tool suite, which is seamlessly integrated in Matlab/Simulink. We sketch how we ensure that the simulated behavior is equivalent to the execution on a particular platform. A short demonstration of the TDL tool suite rounds out the seminar presentation.
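
The LET idea itself is easy to state operationally. The following Python sketch (my own toy scheduler, not the TDL implementation) makes a task's inputs visible at its release instant and its outputs visible exactly at its logical deadline, so the observable trace depends only on the period, not on physical execution time:

    def let_schedule(task, period, horizon, state):
        """Run one periodic LET task; outputs become visible only at
        the end of each period, however fast the task actually runs."""
        t, pending, outputs = 0, None, []
        while t < horizon:
            if pending is not None:
                state = pending          # output published at the deadline
                outputs.append((t, state))
            pending = task(state)        # input logically read at release
            t += period
        return outputs

    # The trace is the same on any platform fast enough to meet deadlines.
    print(let_schedule(lambda x: x + 1, period=10, horizon=50, state=0))
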
Advanced Visual Tracking with Bayesian Filter
Li-Chen Fu, NTU Taiwan, August 8, 2006.
Many problems in science require estimating the state of a system that changes over time from a sequence of noisy measurements. The Bayesian filter provides a rigorous general framework for such dynamic state estimation problems, and visual tracking is one such problem. The sensed sequence of 2D image data, which may lose some 3D information and is usually noisy, includes the target, similar objects, a cluttered background, and so on. The Bayesian filter constructs the posterior probability density function of the target state using probabilistic formulations. The Kalman filter is the optimal Bayesian filter for a linear system model with a Gaussian measurement distribution and an uncluttered environment. When target sensing and localization by computer vision or image processing are not simple and perfect, the visual probabilistic data association filter (VPDAF) is used to track the target in such complicated scenarios. The joint visual probabilistic data association filter (JVPDAF) is proposed for multi-target visual tracking with measurement-target hypotheses. To reduce computational complexity, the particle filter, based on Monte Carlo approximation, is applied for visual tracking in practice. A Markov chain Monte Carlo (MCMC) based particle filter is also proposed to increase the diversity of particles and handle high-dimensional visual tracking problems more effectively.
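
For readers unfamiliar with the baseline, here is a minimal scalar Kalman filter in Python (a generic textbook example, not the VPDAF/JVPDAF machinery of the talk): predict with the motion model, then correct with the noisy measurement.

    def kalman_1d(measurements, a=1.0, q=0.01, r=0.25, x=0.0, p=1.0):
        """a: state transition; q: process noise; r: measurement noise;
        x, p: initial state estimate and its variance."""
        estimates = []
        for z in measurements:
            x, p = a * x, a * a * p + q          # predict
            k = p / (p + r)                      # Kalman gain
            x, p = x + k * (z - x), (1 - k) * p  # update
            estimates.append(x)
        return estimates

    print(kalman_1d([1.1, 0.9, 1.2, 1.0, 0.8]))
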
A Categorical Theory of Hybrid Systems
Aaron D. Ames, May 9, 2006
Robotic systems undergoing impacts, dynamical systems with non-smooth control laws, networks of embedded and robotic systems: these are all examples of systems that display both discrete and continuous behavior, i.e., systems that can be modeled as hybrid systems. Understanding these systems at a deep level therefore has important practical consequences. While the area of hybrid systems has experienced dramatic growth in the last decade, current characterizations of the underlying properties of these systems are extremely limited; this is due to the lack of a unifying mathematical theory for hybrid systems, one that is analogous to the theory of continuous (and discrete) systems.

The goal of this talk is to introduce the theory of hybrid categories--which, I claim, provides a unifying mathematical framework for hybrid systems--along with some of its applications.

I will begin by formalizing the theory of hybrid categories and by demonstrating how this theory allows one to generalize many familiar "non-hybrid objects" to a hybrid setting, e.g., vector spaces and manifolds. With these fundamental concepts in hand, I will outline how this framework allows the translation of results from modern mathematics to the hybrid domain; for example, one can do "homotopy theory" on hybrid categories, and hence hybrid systems, using the theory of model categories. Because this translation would not have been possible without a categorical theory of hybrid systems, the power of this theory is thus demonstrated.

Equally important to the claim that hybrid categories provide a unifying mathematical framework for hybrid systems is the ability of this theory to solve novel problems. Three such examples will be considered: hybrid reduction, Zeno detection, and composition. In the first case, geometric reduction can be generalized to a hybrid setting by generalizing all of the ingredients necessary for reduction; I will discuss the application of these results to bipedal robotic walkers, i.e., how one can obtain three-dimensional walking through the use of hybrid reduction. To detect Zeno behavior in hybrid systems, I will demonstrate how this behavior can be likened to the stability of a new type of equilibria: Zeno equilibria. Using hybrid categories, sufficient conditions for the stability of these equilibria will be presented, which yield sufficient conditions for the existence of Zeno behavior. Finally, I will illustrate how networks can also be modeled through the use of hybrid categories, with an explicit focus on the composition thereof.
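
Zeno behavior itself is easy to exhibit numerically. The standard bouncing-ball example (shown below as a Python sketch; it is the textbook illustration, not Prof. Ames's categorical machinery) has impact times that accumulate at a finite instant, the Zeno time:

    def impact_times(v0=10.0, e=0.5, g=9.8, n=20):
        """Times of successive impacts of a ball with restitution e."""
        t, v, times = 0.0, v0, []
        for _ in range(n):
            t += 2.0 * v / g       # flight time between impacts
            v *= e                 # each bounce loses energy
            times.append(t)
        return times

    ts = impact_times()
    print(ts[-2], ts[-1])   # converging to the Zeno time 2*v0/(g*(1-e))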

Modular Performance Analysis and Interface-Based Design for Real-Time Systems
Ernesto Wandeler, April 11, 2006.
During the system-level design process of a real-time embedded system, a designer typically faces various questions related to the system performance: Do the timing properties of a certain architecture meet the design requirements? What is the on-chip memory demand? What are the different CPU and bus utilizations? Which resource acts as a bottleneck? These and other similar questions are generally hard to answer for embedded system architectures that are highly heterogeneous, parallel and distributed and thus inherently complex. In the first part of this talk I will give an introduction to the field of system-level performance analysis and its challenges, and I will introduce the approach of Modular Performance Analysis with Real-Time Calculus that we developed at ETH Zurich. In the second part of this talk, I will introduce the theory of Real-Time Interfaces, an approach to combine the areas of Interface-Based Design and Real-Time Calculus for interface-based design of real-time systems.
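
One representative calculation from this setting is the worst-case delay bound, the maximum horizontal distance between an upper arrival curve alpha and a lower service curve beta. The Python sketch below (a naive numeric version with invented curves, not the Real-Time Calculus toolbox) computes it by brute force:

    def delay_bound(alpha, beta, horizon, step=1):
        """For each interval length d, find how long the resource needs
        to serve alpha(d) events; the worst case over d is the bound."""
        worst = 0
        for d in range(0, horizon, step):
            demand, t = alpha(d), d
            while beta(t) < demand and t < 10 * horizon:
                t += step
            worst = max(worst, t - d)
        return worst

    alpha = lambda d: 2 + d // 10           # bursty arrivals (upper bound)
    beta = lambda d: max(0, (d - 5) // 8)   # bounded-delay service (lower bound)
    print(delay_bound(alpha, beta, horizon=100))   # prints 21
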
Causality Interfaces
Rachel Zhou, March 7, 2006.
Semantic Foundation of the Tagged Signal Model
Xiaojun Liu, February 22, 2006.
HOPES: Embedded Software Development for MPSoC
Soonhoi Ha, February 14, 2006.
The JAviator Project
Christoph Kirsch, February 7, 2006.
Theoretical and Practical Challenges of LXI and IEEE 1588 Measurement Systems
John C. Eidson, January 24, 2006.

2005 presentations

Observability of Hybrid Systems and applications to ATM (Powerpoint presentation)
Alessandro D'Innocenzo, December 6, 2005.
Achievable Bisimilar Behaviours of Abstract State Systems
Giordano Pola, December 6, 2005.
Approximation Metrics for Discrete, Continuous and Hybrid Systems (Powerpoint presentation)
Antoine Girard, November 29, 2005.
Semantics-Based Optimization Across Uncoordinated Tasks in Networked Embedded Systems (Powerpoint presentation and accompanying video)
Elaine Cheong, November 15, 2005.
Programmable Internet Environment (Powerpoint presentation)
Sinisa Srbljic, November 8, 2005.
The Teja Model of Computation
Akash Deshpande, November 1, 2005.
Semantic Anchoring
Ethan Jackson, October 18, 2005.
Reasoning about Timed Systems Using Boolean Methods
Sanjit A. Seshia, October 11, 2005.
The Ultraconcurrency Revolution in Hardware and Software
Carl Hewitt, October 4, 2005.
Causality Interfaces and Compositional Causality Analysis
Haiyang Zheng, September 20, 2005.
A Structural Approach to Quasi-Static Schedulability Analysis of Communicating Concurrent Programs
Cong Liu, September 13, 2005.
Tools for the Simulation, Verification and Synthesis of Hybrid Systems
Alessandro Pinto, September 6, 2005.
Optimal Scheduling of Acyclic Branching Programs on Parallel Machines
Oded Maler, August 30, 2005.
Coordinated Component Composition (Powerpoint presentation)
Farhad Arbab, August 25, 2005.
Mode Transition Behavior in Hybrid Dynamic Systems
Pieter J. Mosterman, August 16, 2005.
On the Complexity of Multi-Modal Control Procedures (Powerpoint presentation)
Magnus Egerstedt, August 1, 2005.
Promoting reuse and repurposing on the Semantic Grid (Powerpoint presentation)
Antoon Goderis, July 19, 2005.
Programming Mobile Phones
Thomas Riisgaard Hansen, May 18, 2005.
Architecture Modeling and Refinement Verification in Metropolis (Powerpoint presentation)
Doug Densmore, May 3, 2005.
A Flexible User Centered Component Architecture for NASA Mission Operations
Jay Trimble, May 3, 2005.
Algorithms for the Analysis of Continuous Dynamical Systems Based on Reachability Computation
T. John Koo, April 19, 2005.
Tools for the Analysis of Networked Hybrid Systems (Powerpoint presentation)
Hamsa Balakrishnan, April 12, 2005.
Ensuring Data Integrity in Sensor-based Networked Systems
Farinaz Koushanfar, April 5, 2005.
Future Directions: Design Space Exploration and the Metropolis II Framework (Powerpoint presentation)
Abhijit Davare, March 29, 2005.
Paradigm Shift in the Electronics Industry
Tsugio Makimoto, March 15, 2005.
Transparent Distribution based on LET
Wolfgang Pree, March 8, 2005.
Discounting the Future in Systems Theory
Tom Henzinger, March 1, 2005.
Formal Methods for Swarm Technologies
Mike Hinchey, Feb. 22, 2005.
Fault Tolerant Data Flow Modeling in the Generic Modeling Environment
Mark McKelvin, Feb. 15, 2005.
The Metamodeling of a Visual Language for Describing Instruction Sets and Generating Decoders
Trevor Meyerowitz, Feb. 8, 2005.
Reachability and Control of Linear Systems (revisited)
Alex Kurzhanskiy, Feb. 1, 2005.
The FACT PARADIGM, Applications to Online Utility-Based Control (Powerpoint presentation)
G. Biswas, Jan. 26, 2005.
IEEE 1588: Applications in Measurement, Control, and Communication (Powerpoint presentation)
John C. Eidson, January 18, 2005.

2004 presentations

Optimal Control of Networked Systems with Unreliable Links
Bruno Sinopoli, Dec. 7, 2004.
Actor-Oriented Metaprogramming (Powerpoint presentation)
Steve Neuendorffer, Dissertation seminar, November 23, 2004.
Operational Semantics of Hybrid Systems (Powerpoint presentation)
Edward A. Lee, November 16, 2004.
A Categorical Theory of Hybrid Systems
Aaron D. Ames, November 9, 2004.
Stochastic Approximations of Deterministic Hybrid Systems
Alessandro Abate, November 2, 2004.
An Introduction to Hybrid Systems
Aaron D. Ames, November 1, 2004.
Interchange Formats for Hybrid Systems (Powerpoint presentation)
Alessandro Pinto, October 19, 2004.
Formal Methods unifying Computing Science and Systems Theory
Raymond Boute, October 12, 2004.
Modeling and Analysis of Protein Signaling Networks Using Hybrid Automata (Powerpoint presentation)
Ronojoy Ghosh, October 5, 2004.
Managing Scientific Data: From Data Integration to Scientific Workflows (and back) (Powerpoint presentation)
Bertram Ludaescher, September 28, 2004.
Software Enabled Control Capstone Demonstration: Pursuit/Evasion Games with Fixed Wing Aircraft (Powerpoint presentation)
Mike Eklund, September 21, 2004.
Forgetting UML, A Useful Guide to Formal Modeling
Jonathan Sprinkle, September 14, 2004.
Conflict Resolution in Multi-Agent Systems (Powerpoint presentation)
Stefan Resmerita, September 8, 2004.
Extending AutoFocus: Dynamic structure, time, and computation (Powerpoint presentation)
Bernhard Schaetz, September 7, 2004.
FlexCC2: An Optimizing Retargetable C Compiler for DSP Applications (Powerpoint presentation)
Jean-Marc Daveau, June 14, 2004.
Coordination Strategies for Correct-by-Construction Design of SOCs and Embedded Systems
Luca Carloni, May 11, 2004.
Blowing Up Hybrid Systems
Aaron Ames, May 4, 2004.
Wireless Sensor Network based Shooter Localization (Powerpoint presentation)
Miklos Maroti, April 27, 2004.
StreamBit: Sketching high-performance implementations of bitstream programs (Powerpoint presentation)
Armando Solar-Lezama, April 20, 2004.
Finding and Preventing Run-Time Error Handling Mistakes (Powerpoint presentation)
Westley Weimer, April 6, 2004.
Embedded Reasoning for Highly Reconfigurable Systems (Powerpoint presentation)
Markus Fromherz, March 30, 2004.
Hierarchical Reconfiguration of Dataflow Graphs (Powerpoint presentation)
Stephen Neuendorffer, March 16, 2004.
An Algorithm for Verifying Functional Specifications of Real-Time Systems
Joel Ouaknine, March 9, 2004.
Actor-Oriented Design (Powerpoint presentation)
Edward A. Lee, March 2, 2004.
Static Analysis of Relations between Objects
Viktor Kuncak, February 24, 2004.
Brook for GPUs: Stream Computing on Graphics Hardware (Powerpoint presentation)
Ian Buck, February 17, 2004.
Reachability analysis of Process Rewrite Systems: Application to the verification of multithreaded recursive programs
Tayssir Touili, February 10, 2004.
From Spaghetti to Ravioli: Re-factoring the LaserJet codebase
Keith Moore, February 3, 2004.

2003 presentations

A Type System Equivalent to a Model Checker
Jens Palsberg, December 2, 2003.
Domain-Specific Modeling (Powerpoint presentation)
Jonathan Sprinkle, November 25, 2003.
State-Centric Programming for Sensor Networks (Powerpoint presentation)
Jie Liu, November 18, 2003.
Giotto: A Time-Triggered Language for Embedded Programming (Powerpoint presentation)
Ben Horowitz, November 12, 2003.
Bluespec: Why chip design can't be left to EE's (Powerpoint presentation)
Arvind, November 6, 2003.
SPARK: A Parallelizing High-Level Synthesis Framework (Powerpoint presentation)
Sumit Gupta, October 14, 2003.
Computational Control of networks of dynamical systems: Application to the National Airspace System
Alexandre Bayen, October 7, 2003.
Software Development in the Academic Environment (Powerpoint presentation)
Christopher Hylands Brooks, September 30, 2003.
Heterogeneous Reactive Models and Correct-by-Construction Deployment (Powerpoint presentation)
Alberto Sangiovanni-Vincentelli, September 23, 2003.
Salzburg's emLab
Wolfgang Pree, September 16, 2003.
Embedded Software: Better Models, Better Code (Powerpoint presentation)
Tom Henzinger, September 9, 2003.
Streaming Models of Computation in The Ptolemy Project (Powerpoint presentation)
Edward Lee, September 2, 2003.
System Design Using Kahn Process Networks: The Compaan/Laura Approach
Bart Kienhuis, May 13, 2003.
MESCAL: Design Support for Embedded Processors and Applications
Andrew Mihal, May 8, 2003.
Design of safety-critical distributed applications (Powerpoint presentation)
Claudio Pinello, April 22, 2003.
The Metronome: A Hard Real-time Garbage Collector (Powerpoint presentation)
David F. Bacon, March 18, 2003.
Automatic Verification of Software Systems (Powerpoint presentation)
Rupak Majumdar, March 11, 2003.
Strauss: a temporal-specification miner
Ras Bodik, March 4, 2003.
Some Features of HyVisual (Powerpoint presentation)
Haiyang Zheng, February 25, 2003.
Modeling Hybrid Systems in HyVisual (Powerpoint presentation)
Xiaojun Liu, February 25, 2003.
Model-based Development with Giotto@Simulink
Wolfgang Pree, February 18, 2003.
Synchronous Vehicle Formation Control: Some difficulties and new mathematical results (Powerpoint presentation)
Raja Sengupta, February 11, 2003.
The Embedded Software Lab
Christoph M. Kirsch, February 4, 2003.

2002 presentations

Research on Embedded System Design at L'Aquila
M.D. Di Benedetto, December 3, 2002.
Application of Level Set Methods to Control & Reachability Problems in Continuous & Hybrid Systems
Ian Mitchell, November 19, 2002.
Mechatronics and Embedded Computing
David M. Auslander, November 5, 2002.
Hybrid Systems: Modeling, Analysis, Control (Powerpoint presentation)
Shankar Sastry, October 29, 2002.
Embedded Systems in the Automotive Industry
Karl Hedrick, October 22, 2002.
Embedded System Design: Foundations, Methodologies, and Applications (Powerpoint presentation)
Alberto Sangiovanni-Vincentelli, October 15, 2002.
Research Overview (Powerpoint presentation)
George Necula, October 1, 2002.
The Mescal Project (Powerpoint presentation)
Kurt Keutzer, September 24, 2002.
Ptolemy Project within Chess (Powerpoint presentation)
Edward A. Lee, September 17, 2002.
Model-based Design of Embedded Software (Powerpoint presentation)
Tom Henzinger, September 10, 2002.
Research Overview (Powerpoint presentation)
Alex Aiken, September 3, 2002.

Key Resources:

* Mission statement and organizational overview.
* Resources for CHESS researchers (presentation and poster templates, etc.).
* Visitor parking, transportation, directions, lodging.
This center is aimed at developing model-based and tool-supported design methodologies for real-time fault tolerant software on heterogeneous distributed platforms. We are bridging the gap between computer science and systems science by developing the foundations of a modern systems science that is simultaneously computational and physical. This represents a major departure from the current, separated structure of computer science (CS), computer engineering (CE), and electrical engineering (EE): it reintegrates information and physical sciences. The center was initially funded in part by an Information Technology Research (ITR) project from the National Science Foundation (NSF) and now receives funding from government and industry.

 

What Are Cyber-Physical Systems?

The Center for Hybrid and Embedded Software Systems (CHESS) is building foundational theories and practical tools for systems that combine computation, networking, and physical dynamics. In such systems, embedded computers and networks monitor and control physical processes in feedback loops where physical processes affect computations and vice versa. For the last 30 years or so, computers have been increasingly embedded in stand-alone, self-contained products. We are poised, however, for a revolutionary transformation as these embedded computers become networked. The transformation is analogous to the enormous increase in the utility of personal computers with the advent of the web. Just as personal computers changed from word processors to global communications devices and information portals, embedded computers will change from small self-contained boxes to cyber-physical systems, which sense, monitor, and control our intrinsically distributed human environment.

Cyber-Physical Systems (CPS) are integrations of computation, networking, and physical processes. Embedded computers and networks monitor and control the physical processes, usually with feedback loops where physical processes affect computations and vice versa. The economic and societal potential of such systems is vastly greater than what has been realized, and major investments are being made worldwide to develop the technology. There are considerable challenges, particularly because the physical components of such systems introduce safety and reliability requirements qualitatively different from those in general-purpose computing. Moreover, the standard abstractions used in computing do not fit the physical parts of the system well.

Applications of CPS arguably have the potential to dwarf the 20th century IT revolution. They include high confidence medical devices and systems, assisted living, traffic control and safety, advanced automotive systems, process control, energy conservation, environmental control, avionics, instrumentation, critical infrastructure control (electric power, water resources, and communications systems for example), distributed robotics (telepresence, telemedicine), defense systems, manufacturing, and smart structures. It is easy to envision new capabilities, such as distributed micro power generation coupled into the power grid, where timing precision and security issues loom large. Transportation systems could benefit considerably from better embedded intelligence in automobiles, which could improve safety and efficiency. Networked autonomous vehicles could dramatically enhance the effectiveness of our military and could offer substantially more effective disaster recovery techniques. Networked building control systems (such as HVAC and lighting) could significantly improve energy efficiency and demand variability, reducing our dependence on fossil fuels and our greenhouse gas emissions. In communications, cognitive radio could benefit enormously from distributed consensus about available bandwidth and from distributed control technologies. Financial networks could be dramatically changed by precision timing. Large scale services systems leveraging RFID and other technologies for tracking of goods and services could acquire the nature of distributed real-time control systems. Distributed real-time games that integrate sensors and actuators could change the (relatively passive) nature of on-line social interactions. The positive economic impact of any one of these application areas would be enormous.

 

News:

* February 13, 2015: CHESS, iCyPhy and TerraSwarm Lunch and Poster Session at the Berkeley EECS Annual Research Symposium (BEARS)
* December 17, 2014: Ptolemy II 10.0.1 online demonstrations, documentation and software released.
* June 6, 2014: Denso has joined CHESS, see the denso workgroup.
* February 13, 2014: CHESS, iCyPhy, SwarmLab and TerraSwarm Lunch and Poster Session at the Berkeley EECS Annual Research Symposium (BEARS)
* November 7, 2013: Tenth Biennial Ptolemy Miniconference
* April 8, 2013: The First Workshop on Cyber-Physical Systems Education was held in Philadelphia, Pennsylvania.
* March 10, 2013: JFMI 1.0.1, a Java wrapper for the Functional Mock-up Interface, was released. The Functional Mock-up Interface (FMI) is an emerging standard that aims to make it easy to share modeling components between various simulation tools and environments. This release fixes a bug involving reading booleans and adds a facility that builds shared libraries at runtime.
* February 14, 2013: As part of the Berkeley EECS Annual Research Symposium (BEARS), the Center for Hybrid and Embedded Software Systems (CHESS) held a Lunch and Poster Session in 545 Cory Hall in conjunction with the Donald O. Pederson Center for Electronic Systems Design.
* January 17, 2013: UC Berkeley-Led Team Awarded $27.5M for New TerraSwarm Research Center.
* October 25-26, 2012: The NITRD National Workshop on The New Clockwork for Time-Critical Systems was held in Baltimore.
* June 18, 2012: JFMI-1.0, a Java wrapper for the Functional Mock-up Interface, was released. The Functional Mock-up Interface (FMI) is an emerging standard that aims to make it easy to share modeling components between various simulation tools and environments.
* February 23, 2012: As part of the Berkeley EECS Annual Research Symposium (BEARS), the Center for Hybrid and Embedded Software Systems (CHESS) held a Lunch and Poster Session in 545 Cory Hall in conjunction with the Donald O. Pederson Center for Electronic Systems Design.
* January 27, 2012: Pt1588: an open source implementation of IEEE 1588-2008 made available.
* November 17, 2011: Dr. Patricia Derler has been awarded her degree sub auspiciis Praesidentis rei publicae (under the auspices of the President of Austria). Wikipedia's Latin Honors page says that only one out of 2500 doctoral graduates per year achieves this honor. Dr. Derler was awarded a golden honorary ring of Austria by Heinz Fischer, the president of Austria. Dr. Derler is a postdoctoral visitor focusing on the PTIDES effort.
(Original Article | Translation)
* November 6, 2011: The UC Berkeley team (Michael Zimmer and Baruch Sterin) tied for first place at the CADathlon at ICCAD. "In the spirit of the long-running ACM programming contest, the CADathlon challenges students in their CAD knowledge, and their problem solving, programming, and teamwork skills." The CADathlon was 9 hours long.
* August 2, 2011: Slobodan Matic placed among the top four winners in Siemens' Smart Grid Innovation Contest. The final ranking will be announced on September 14. Dr. Matic's idea and white paper are titled "Integrated Architecture and Programming Model for Distribution and Microgrid Automation."
* June 12, 2011: The US Army Research Laboratory has funded the Disciplined Design of System of Systems (DDOSOS) project. This project includes research into Multiform Models of Time, Temporal Isolation, Hybrid Models, Correct Composition, Linking Behaviors to Implementation, and Design Drivers.
* June 1, 2011: Siemens has funded Dr. Patricia Derler's "T-REX: Triggered Real-Time Events on XMOS" project.
* May 25, 2011: PTARM Simulator v1.0 released. PTARM is a simulator for a Precision Timed (PT)ARM Architecture.
* April 28, 2011: Rome AFRL has funded the second phase of the Ptolemy/Rome AFRL Extensible Modeling and Analysis Framework project.
* February 16, 2011: Ninth Ptolemy Miniconference, Berkeley, CA
* February 17, 2011: As part of the Berkeley EECS Annual Research Symposium (BEARS), the Ptolemy Project held a Lunch and Poster Session in 545 Cory Hall in conjuction with the Center for Hybrid and Embedded Software Systems (CHESS) and the Donald O. Pederson Center for Electronic Systems Design.
* January, 2011: Second edition of Structure and Interpretation of Signals and Systems by UC Berkeley Professors Edward Ashford Lee and Pravin Varaiya published.
* November 27, 2010: The 2009-2010 CHESS NSF ITR Annual Report and the CHESS NSF ITR Final Report (2003-2010) are available.
* October 28, 2010: Ptolemy II 8.0.1 online demonstrations, documentation and software released.
* October 24, 2010: Ptolemy Tutorial: Exploring Models of Computation with Ptolemy II, Scottsdale, AZ
* October, 2010: First edition of Introduction to Embedded Systems, A Cyber-Physical Systems Approach by UC Berkeley Professors Edward Ashford Lee and Sanjit A. Seshia published.
* September 2, 2010: Professor Edward A. Lee and Professor Sanjit A. Seshia receive an NSF Grant: "CPS: Medium: Timing-Centric Software" for work on Ptides and other projects.
* August 18 & 19, 2010: Strategic Directions in Software at Scale (SaS), Berkeley
* August 12, 2010: Cyberphysical Systems Education Workshop (CPSEW), Arlington, VA
* July 23, 2010: ActionWebs, Berkeley
* April 12, 2010: First Workshop on Secure Control Systems (SCS), part of CPSWeek2010, Stockholm.
* February 14, 2010: The NY Times Magazine mentions the UC Berkeley International Genetically Engineered Machine competition (iGEM) Eugene/Spectacles/Kepler Software effort, sponsored in part by CHESS.
* February 11, 2010: 2010 Berkeley EECS Annual Research Symposium (BEARS). A poster session by students, faculty, and postdoctoral scholars, in the area of embedded software and systems. Areas covered include hybrid systems, control, autonomous systems, modeling and computation, as well as tools and applications in these areas. The posters are available as Publications.
* January 13, 2010: Toyota has renewed their support of CHESS. Toyota primarily supports the work of Karl Hedrick and Masayoshi Tomizuka at the Mechanical Systems Control Laboratory.
* January 13, 2010: National Instruments has renewed their support of CHESS.
* December 17, 2009: Actionwebs Kickoff Meeting, UC Berkeley.
* December 2, 2009: High-Confidence Design for Distributed Embedded Systems (HCDDES) MURI Review Meeting, UC Berkeley.
* November 5, 2009: The Multiscale Systems Center (MuSyC) held its kickoff meeting. From CHESS, Professor Alberto Sangiovanni-Vincentelli is leading the Distributed Sense and Control Systems thrust. Professor Edward A. Lee is also participating.
* November 4, 2009: Dr. Doug Densmore's Clotho project tied for "Best Software Tool" at MIT's International Genetically Engineered Machine (iGEM) competition. Clotho uses many ideas from Platform-based design, a key idea from Professor Alberto Sangiovanni-Vincentelli. Clotho includes development of actors and automated assembly processes for the Kepler scientific workflow management system.
* October 2, 2009: The 2009 CHESS NSF ITR Annual Report is available.
* September 15, 2009: The NSF-funded ActionWebs project starts. This project, led by Professor Claire Tomlin, will develop a theory of "ActionWebs", that is, "networked embedded sensor-rich systems, which are taskable for coordination of multiple decision-makers."
* July 17, 2009: UC Berkeley Professors Robert K. Brayton, Richard Rudell, Alberto Sangiovanni-Vincentelli, and Albert R. Wang will be the first recipients of the ACM/IEEE A. Richard Newton Technical Impact Award in Electronic Design Automation (EDA). The award will be presented at this year's DAC conference in San Francisco.
* May 12, 2009: The U.S. House of Representatives passed H.R. 2020, the Networking and Information Technology Research and Development Act of 2009, which, among several things, "creates a task force to explore mechanisms for carrying out collaborative R&D activities in cyber-physical systems;"
* April 15 & 16, 2009: The Eighth Biennial Ptolemy Miniconference and Tutorial
* April 7, 2009: Call for Participation in the 1st International Workshop on Bio-Design Automation, to be held on July 27, 2009 in San Francisco as part of DAC. Dr. Doug Densmore is the general chair.
* March 30, 2009: Thales has joined CHESS. ("Center for Hybrid and Embedded Software Systems (CHESS) Overview" Powerpoint presentation)
* February 28, 2009: Professor Claire Tomlin's STARMAC Quad-rotor Aircraft was featured in the San Francisco Chronicle's article "New UC labs focus on ideas useful to society."
* February 12, 2009: 2009 Berkeley EECS Annual Research Symposium (BEARS). A poster session by students, faculty, and postdoctoral scholars, in the area of embedded software and systems. Areas covered include hybrid systems, control, autonomous systems, modeling and computation, as well as tools and applications in these areas.
* January 30, 2009: CHESS 2009 Prospectus released. This 20 page brochure covers recent and on-going CHESS activities. Printed copies are available by request from chessadmin.
* January 26, 2009: Precision Timed (PRET) Architecture Simulator 1.0 available. The goal of the PRET Project is to reintroduce timing predictability and repeatability by judiciously adopting architectural optimization techniques to deliver performance enhancements without sacrificing timing predictability and repeatability.
* December 8, 2008: Stochastic Hybrid Systems: Theory and Applications Pre-Conference Workshop, 47th IEEE Conference on Decision and Control.
* December 4, 2008: Professor Thomas A. Henzinger has been appointed the first president of IST (Institute of Science and Technology) Austria, a new research institute and graduate school near Vienna. Professor Henzinger assumes his new position on September 1, 2009.
* November 14, 2008: Professor Alberto Sangiovanni-Vincentelli has won the 2009 IEEE/RSE Wolfson James Clerk Maxwell Award with the following citation:
"For pioneering innovation and leadership in electronic design automation that have enabled the design of modern electronics systems and their industrial implementation"
Previous winners of this illustrious award are Sir Timothy Berners-Lee (2008) and Irwin M. Jacobs & Andrew J. Viterbi (2007).
* November 9, 2008: Dr. Doug Densmore's Clotho project won a gold medal in MIT's International Genetically Engineered Machine (iGEM) competition. Clotho uses many ideas from Platform-based design, a key idea from Professor Alberto Sangiovanni-Vincentelli. Clotho also won "best software tool" at iGEM.
* October 31, 2008 Clotho released by Dr. Doug Densmore's group. Clotho is a software implementation of platform-based design for synthetic biological systems.
* October 14, 2008: High-Confidence Design for Distributed Embedded Systems (HCDDES) MURI Review Meeting, UC Berkeley.
* July 31, 2008: Don Winter (Boeing) and others testified before the House Science and Technology Committee about Cyber-Physical Systems (CPS) and other topics. See the Cyber-physical Systems Page for details.
* April 21, 2008: Workshop: From Embedded Systems to Cyber-Physical Systems: a Review of the State-of-the-Art and Research Needs, St. Louis, MO.
* April 11, 2008: Jonathan Sprinkle is featured in a University of Arizona news article, "Engineering Prof Builds Brains for Robotic Cars" that discusses some of his Berkeley efforts such as the DARPA Urban Challenge.
* April 4, 2008: Ptolemy II 7.0.1 online demonstrations, documentation and software released.
* Mar 17, 2008: PTIDES is participating in the Google Summer of Code!
* Mar 7, 2008: The UC Berkeley EECS web page says:
The U.S. Army Research Laboratory has established a Collaborative Technology Alliance in the area of Micro Autonomous Systems and Technology (MAST). The Berkeley MAST team includes EECS Profs. Michel Maharbiz, Clark Nguyen, Kris Pister, Ronald Fearing, Claire Tomlin, and Shankar Sastry, and includes 8 other universities and NASA's Jet Propulsion Lab. The goal of the MAST project is to enable the autonomous operation of a collaborative ensemble of multifunctional, mobile microsystems. The MAST project, including options, provides funding of up to $89 million over the next 10 years.
* Mar 1, 2008: HSBC Bank has joined CHESS! We will be using the hsbc2 workgroup to coordinate this work.
* February 21, 2008: 2008 Berkeley EECS Annual Research Symposium (BEARS). A poster session by students, faculty, and postdoctoral scholars, in the area of embedded software and systems. Areas covered include hybrid systems, control, autonomous systems, modeling and computation, as well as tools and applications in these areas. The posters are available as Publications.
* February 7, 2008: Ptolemy II 7.0.beta online demonstrations, documentation and software released.
* January 24, 2008: We've been posting Embedded Systems Calls For Papers (CFPs).
* September 6, 2007: A joint review meeting of the MURI projects on High-Confidence Design for Distributed Embedded Systems, bringing together the Caltech Specification, Design and Verification of Distributed Embedded Systems MURI and the High-Confidence Design for Distributed Embedded Systems MURI, was held at Berkeley. The HCDDES Presentations are available.
* August 9, 2007: Dr. George Anwar and Gabe Hoffman presented their research at National Instruments' NI Week 08. Video of their presentation is available. Dr. Anwar mentions Embedded Systems curriculum at Berkeley, CHESS, Professor Tomlin and the Ptolemy project. Gabe demos the Starmac Quadrotor helicopter. The Ptolemy project is a part of the High-Confidence Design for Distributed Embedded Systems (HCDDES) MURI that is using the Starmac as a target platform.
* July 19, 2007: Shankar Sastry has been named Dean of the UC Berkeley College of Engineering.
* July 11, 2007: The EU-US Workshop on Wirelessly Networked Embedded Systems occurred on July 10, 2007 in Edinburgh. This workshop is the fourth in the series of themed EU-US workshops, after Paris (2005), Washington (March 2006), and Helsinki (June 2006). The theme of the Edinburgh workshop was "Cyber-Physical Systems and Beyond".
* June 30, 2007: Professor Edward A. Lee and Professor Stephen Edwards have been awarded a three-year National Science Foundation grant titled "PRET: Precision Timed Architectures".
* June 17, 2007: The 2007 CHESS NSF ITR Annual Report is available.
* June 6, 2007: Stephen Edwards' Precision Timed (PRET) Machines talk at DAC is mentioned in the EETimes article, "Designers pitch 'wild and crazy' ideas at DAC."
* May 9, 2007: Professor Ahmad Bahai has joined the CHESS faculty.
* April 11, 2007: Jonathan Sprinkle, the current CHESS Executive Director, will be starting a faculty position at the University of Arizona in July. Christopher Brooks will be the new CHESS Executive Director.
* March 2007: Professor Alberto L. Sangiovanni-Vincentelli's article, "Quo Vadis SLD: Reasoning about Trends and Challenges of System-Level Design," is the opening paper of a special issue of the Proceedings of the IEEE.
* February 15, 2007: 2007 Berkeley EECS Annual Research Symposium (BEARS). A poster session by students, faculty, and postdoctoral scholars, in the area of embedded software and systems. Areas covered include hybrid systems, control, autonomous systems, modeling and computation, as well as tools and applications in these areas.
* February 14, 2007: Chess Winter Meeting
* February 13, 2007: The Seventh Biennial Ptolemy Miniconference and ptutorial.
* February 9, 2007: Viptos 1.0.2 source code released. Viptos is an interface between TinyOS and Ptolemy II. TinyOS is an event-driven operating system designed for sensor network nodes that have very limited resources (e.g., 8K bytes of program memory, 512 bytes of RAM). TinyOS is used, for example, on the Berkeley MICA motes, which are small wireless sensor nodes.
* February 4, 2007: Ptolemy II 6.0.2 online demonstrations, documentation and software released.
* January 14, 2007: Ptplot 5.6 released.
* January 11, 2007: Chess faculty member Thomas Henzinger is now an ACM Fellow!
* October 30, 2006: Viptos 1.0.beta source code released.
* October 13, 2006: Metropolis 1.1.2 released.
Metropolis consists of an infrastructure, a tool set, and design methodologies for various application domains. The infrastructure provides a mechanism such that heterogeneous components of a system can be represented uniformly and tools for formal methods can be applied naturally. Metropolis-1.1.2 includes support for SystemC-2.1 and Windows.
* October 4, 2006: Fall ITR Review, Alexandria, VA.
* September 20, 2006: Claire Tomlin, UC Berkeley EECS Associate Professor and CHESS Principal Investigator, won a MacArthur Genius Award!
* August 23, 2006: The SUPERB Program ended on August 4, and CHESS proudly sponsored Dominique Duncan (University of Chicago), Nandita Mitra (Rutgers University), Nashlie Sephus (Mississippi State University) and Heather Taylor (University of Vermont). These students worked with individual mentors throughout the summer performing research and supporting activities in the area of hybrid and embedded systems. The mentors for the summer were Saurabh Amin, Elaine Cheong, Alex Kurzhanskiy, and Todd Templeton. Each student performed an individual project, culminating in a research paper and poster presentation.
* May 31, 2006: The 2006 Annual Report is available.
* May 8, 2006: The May 2006 IEEE Computer Magazine contains a cover feature by Edward A. Lee: "The Problem with Threads"
For concurrent programming to become mainstream, we must discard threads as a programming model. Nondeterminism should be judiciously and carefully introduced where needed, and it should be explicit in programs.
* March 15, 2006: COSI, the COmmunication Synthesis Infrastructure, released. The COSI project aims to provide an infrastructure that assists designers in the difficult task of interconnecting components.
* The BEARS Chess Open House was held on Thursday, February 23, 2006, from 3:00-5:00 pm. Posters available online.
* February 21-22, 2006: CHESS researchers Profs. Gabor Karsai, T. John Koo, Shankar Sastry, and Janos Sztipanovits were invited to give lectures at the workshop on Hybrid and Embedded Systems: Technologies and Applications. The workshop was jointly organized by the Hong Kong Science and Technology Parks Corporation and the Department of Automation and Computer-Aided Engineering of the Chinese University of Hong Kong, and was held at the Hong Kong Science and Technology Parks. Chess researchers also participated in an Open Forum, moderated by Dr. OnChing Yue, Science Advisor of the Innovation and Technology Commission of the Government of the Hong Kong Special Administrative Region, on many important issues ranging from scientific research to application development of hybrid and embedded systems in the Greater China Region.
* The NSF Third Year ITR Site Visit was held on Monday, November 21, 2005.
Program Presentations
* October 7, 2005: HyVisual 5.0.1 released.
Hybrid systems are systems with continuous-time dynamics, discrete events, and discrete mode changes. This visual modeler supports construction of hierarchical hybrid systems. It uses a block-diagram representation of ordinary differential equations (ODEs) to define continuous dynamics. It uses a bubble-and-arc diagram representation of finite state machines to define discrete behavior.
HyVisual-5.0-alpha includes better support for combining continuous-time signals and discrete events, including multiple discrete events that occur at the same time.
* October 5, 2005: Ptolemy II 5.0.1 released.
The Ptolemy project studies modeling, simulation, and design of concurrent, real-time, embedded systems. The focus is on assembly of concurrent components. The key underlying principle in the project is the use of well-defined models of computation that govern the interactions between components. This release includes improved modeling of hybrid systems, a Dynamic Dataflow domain, and a Heterochronous Dataflow domain.
* August 25, 2005: The Chess server is on new hardware. If you manage a Chess workgroup using CVS, you will need to change CVS servers. See the FAQ for details.
* July 21, 2005: Ptolemy II 5.0 released.
* June 6, 2005: SUPERB-IT.
The Chess Center sponsored 6 undergraduates for the summer, as part of the SUPERB-IT program at Berkeley. Meet them, and learn about their research, at the Chess SUPERB-IT website.
* May 11, 2005: Ptolemy II 5.0-beta released.
* May 11, 2005: The Chess Review was held on the UC Berkeley Campus.
Program Presentations
* March 4, 2005: HyVisual 5.0-alpha, a block-diagram editor and simulator for continuous-time and hybrid systems.
* February 18, 2005: Metropolis 1.0.2 released.
* T. John Koo, Chess alumnus and Assistant Professor at Vanderbilt University, has received an NSF Faculty Early Career Development Award of $0.4M, spanning April 1, 2005 - March 31, 2010, in support of his research on "Computation Platform for the Design of Hybrid Systems."
* February 10, 2005: Open House in association with Berkeley EECS Annual Research Symposium (BEARS)
* November 18-19, 2004: The 2004 NSF Second Year ITR Site Visit was held at UC Berkeley.
Program Presentations
* October 28, 2004: Giotto 1.0.1 released.
The Giotto system is a programming methodology for embedded control systems running on possibly distributed platforms. The Giotto system consists of a time-triggered programming language, a compiler, and a runtime system. Giotto aims at hard real-time applications with periodic behavior.
* October 28, 2004: HyVisual 4.0.2, a block-diagram editor and simulator for continuous-time and hybrid systems.
* September 20, 2004: Metropolis 1.0 released.
* The Summer Undergraduate Program in Engineering Research at Berkeley - Information Technology (SUPERB-IT) took place from June 14 to August 6, 2004.
Project Papers and Posters
* August 4, 2004: Ptolemy II 4.0.1 released.
* August 4, 2004: VisualSense 4.0.1 Visual editor and simulator for wireless sensor network system released.
* June 4, 2004: Metropolis 1.0-beta released to Chess website members.
* May 10, 2004: The Chess Review was held on the UC Berkeley Campus.
Program Presentations
* February 27, 2004: Open house at UC Berkeley held in association with BEARS.
* January 26, 2004: Kepler: A System for Scientific Workflows is a cross-project collaboration to develop open source tools for scientific workflows; it is currently based on the Ptolemy II system for heterogeneous concurrent modeling and design.
* December 3rd, 2003: NSF Onsite Review was held at UC Berkeley.
Program Presentations
* On August 21, 2003, we released Ptolemy II 3.0.2, a block diagram editor and simulator for continuous-time, hybrid and data flow systems.
* The Summer Undergraduate Program in Engineering Research at Berkeley - Information Technology (SUPERB-IT) took place from June 16 to August 8, 2003.
Project Papers and Posters
* Annual Report 2003 for the Foundations of Hybrid and Embedded Systems and Software, NSF/ITR Project.
* May 8, 2003: The Chess Review was held at The Claremont Hotel in Berkeley.
Program Presentations
* On February 27, 2003, Prof. Edward Lee gave an overview of Chess at the CITRIS Founding Corporate Members Meeting at UC Davis. View the video (350Kbps Real Stream). Download Real Networks player. Get the slides as Powerpoint. Get the slides as PDF.
* On January 31, 2003, we released HyVisual 2.2, a block-diagram editor and simulator for continuous-time and hybrid systems.
* On January 14, 2003, we released CHIC, a Checker for Interface Compatibility.

CHESS in the news:

* Soft Walls project gets media attention: ABC TV News (12/03) and Slashdot (1/04).
* NSF gives $13 million for UC Berkeley-led project bridging computer software and systems science
UC Berkeley Campus News, September 25, 2002.
* National Science Foundation Announces Grant Winners
New York Times, September 26, 2002. (Free Registration required)