In the SDF domain, an iteration is well-defined. It is the minimum number of firings that brings the buffers back to their original state. In SDF, this can be found by a compile-time scheduler by solving the balance equations. In both BDF and DDF, it turns out that it is undecidable whether such a sequence of firings exists; that is, no algorithm can answer the question in finite time for all graphs. This explains, in part, why the BDF domain may fail to construct a compile-time schedule and fall back on the DDF schedulers.
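For example, consider a two-star SDF graph in which star A produces two particles per firing on an arc from which star B consumes three. The balance equation 2rA = 3rB has the smallest positive integer solution rA = 3, rB = 2, so one iteration consists of three firings of A and two of B, after which the arc buffer is back in its original state.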
We have three simple and obvious criteria that a DDF scheduler should satisfy:
(a) it should be able to execute a graph forever whenever it is possible to do so;
(b) it should execute a graph forever in bounded memory whenever it is possible to do so; and
(c) it should execute the graph in a determinate way, so that which stars fire in an iteration does not depend on arbitrary decisions made by the scheduler.
The reason that these criteria are hard to satisfy is fundamental. We have already pointed out that it is undecidable whether a sequence of firings exists that will return the graph to its original state. This fact can be used to show that it is undecidable whether a graph can be executed in bounded memory. Thus, no finite analysis can always guarantee (b). The trick is that the DDF scheduler in fact has infinite time to run an infinite execution, so, remarkably, it is still possible to guarantee condition (b). The new DDF schedulers do this.
Regarding condition (a), it is also undecidable whether a graph can be executed forever. This question is equivalent to the halting problem, and the DDF model of computation is sufficiently rich that the halting problem cannot always be solved in finite time. Again, we are fortunate that the scheduler has infinite time to carry out an infinite execution. This is really what we mean by dynamic scheduling!
Condition (c) is more subtle and centers around the desire for determinate execution. What we mean by this, intuitively, is that a user should be able to tell immediately what stars will fire in one iteration, knowing the state of the graph. In other words, which stars fire should not depend on arbitrary decisions made by the scheduler, like the order in which it examines the stars.
To illustrate that this is a major issue, suppose we naively define an iteration to consist of "firing all enabled stars at most once." Consider the simple example in figure
We have implemented two policies in DDF. These are explained below.
The first policy, implemented by the DDFSimpleSched class, scans all stars and determines which are enabled. In a second pass, it then fires the enabled stars. Thus, the order in which the scheduler examines the stars has no impact on which ones fire in a given iteration.
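To make this two-pass structure concrete, here is a small C++ sketch. It is not the DDFSimpleSched source code; the Star type, the onePass function, and the little token counter in main are hypothetical scaffolding, used only to show that the set of stars to fire is fixed before any firing takes place.

    // A toy sketch of the two-pass policy (not the DDFSimpleSched source).
    // The Star type and the driver in main() are hypothetical scaffolding.
    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    struct Star {
        std::string name;
        std::function<bool()> enabled;   // does the star have enough input right now?
        std::function<void()> fire;
    };

    // One pass: the enabled set is computed from the state before any firing,
    // so the order in which stars are examined cannot change which stars fire.
    void onePass(std::vector<Star>& stars) {
        std::vector<Star*> toFire;
        for (Star& s : stars)                        // pass 1: scan every star
            if (s.enabled()) toFire.push_back(&s);
        for (Star* s : toFire)                       // pass 2: fire exactly that set
            s->fire();
    }

    int main() {
        int tokens = 0;                              // particles queued from A to B
        std::vector<Star> stars = {
            {"A", []  { return true;       }, [&] { ++tokens; std::cout << "A "; }},
            {"B", [&] { return tokens > 0; }, [&] { --tokens; std::cout << "B "; }}
        };
        onePass(stars);     // prints "A ": B stays idle even though A's firing
        std::cout << '\n';  // produced data, because the enabled set was fixed first
    }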
Unfortunately, as stated, this simple policy still does not work. Suppose that star A in figure 7-2 produces two particles each time it fires, and star B consumes one. Then this policy will fire star A alone in the first iteration and both A and B in all subsequent iterations. This violates criterion (b), because the graph will not execute in bounded memory: one unconsumed particle accumulates on the arc in every iteration. More importantly, it is counterintuitive. Thus, the DDFSimpleSched class implements a more elaborate algorithm.
This larger notion of an iteration can be specified using the target pragma mechanism to identify particular stars that must fire some specific number of times (greater than or equal to one) in each user iteration. To use this, make sure the domain is DDF and the target is DDF-default. Then in pigi, place the mouse over the icon of the star in question and issue the edit-pragmas command ("a"). One pragma (the one understood by this target) will appear; it is called firingsPerIteration. Set it to the desired value. This will then define what makes up an iteration.
The second policy is implemented by the DDFClustSched class. WARNING: as of this writing, this scheduler will not work with wormholes, and will issue a warning. Nonetheless, it is an interesting scheduler for two reasons. The first is its clustering behavior. The second is that it uses a different definition of a basic iteration: a basic iteration (loosely) consists of as many firings as possible, subject to the constraints that no star fires more than once and that deferrable stars are avoided if possible. The complete algorithm is given in figure
The second group consists of the DDF-specific stars where the number of particles required on the input ports is unspecified. An example is the EndCase star (a multi-input version of the BDF Select star). The EndCase star has one control input and one multiport input for data. The control input value specifies which data input port requires a particle. Stars in this group must specify at run time how many input particles they require on each input port. Stars specify a port with a call to a method called waitPort and the number of particles needed with a call to waitNum. To determine whether a star is runnable, the scheduler checks whether a specified input port has the specified number of particles.
For example, in the EndCase star, waitPort points to the control input port at the beginning. If the control input has enough data (one particle), the star is fired. When it fires, it checks the value of the particle on the control port and changes the waitPort pointer to the data input port on which it needs the next particle. The star will be fired again when it has enough data on the input port pointed to by waitPort. This time, it collects the input particle and sends it to the output port. See figure 7-5.
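The following C++ sketch illustrates this two-phase behavior. It is not the EndCase source code; the Buffer type, the EndCaseSketch class, and the small driver in main are hypothetical, and the waitPort and waitNum members simply mirror the mechanism described above.

    // Hypothetical sketch of an EndCase-like star (not the Ptolemy source).
    #include <deque>
    #include <iostream>
    #include <utility>
    #include <vector>

    using Buffer = std::deque<int>;    // one arc, carrying integer particles

    class EndCaseSketch {
        Buffer& control;               // control input port
        std::vector<Buffer*> data;     // multiport data input
        Buffer& output;
        Buffer* waitPort;              // port the star is currently waiting on
        int     waitNum = 1;           // particles needed on waitPort
        int     selected = -1;         // < 0 means the next firing reads control
    public:
        EndCaseSketch(Buffer& c, std::vector<Buffer*> d, Buffer& o)
            : control(c), data(std::move(d)), output(o), waitPort(&control) {}

        // The scheduler considers the star runnable only when this holds.
        bool runnable() const { return (int)waitPort->size() >= waitNum; }

        void go() {
            if (selected < 0) {
                // Phase 1: read the control particle, then wait on that data port.
                selected = control.front();  control.pop_front();
                waitPort = data[selected];
            } else {
                // Phase 2: forward one particle from the chosen data input.
                output.push_back(data[selected]->front());
                data[selected]->pop_front();
                selected = -1;
                waitPort = &control;   // go back to waiting for a control value
            }
        }
    };

    int main() {
        Buffer ctrl{1, 0}, d0{10}, d1{20}, out;
        EndCaseSketch star(ctrl, {&d0, &d1}, out);
        while (star.runnable()) star.go();          // a trivial one-star "scheduler"
        for (int v : out) std::cout << v << ' ';    // prints: 20 10
        std::cout << '\n';
    }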
The third group of stars comprises sources. Sources are always runnable. Source stars introduce a significant complication into the DDF domain. In particular, since they are always runnable, it is difficult to ensure that they are not invoked too often. This scheduler has a reasonable but not foolproof policy for dealing with this. Recall that the DDF domain is a superset of the SDF domain. The definition of one iteration for this scheduler tries to obtain the same results as the SDF scheduler when only SDF stars are used. In the SDF domain, the number of firings of each source star, relative to other stars, is determined by solving the balance equations. However, in the DDF domain, the balance equations do not apply in the same form. The technique we use instead is lazy evaluation.
We can now define what is meant by one iteration in DDF. An iteration consists of one firing of each source star, followed by as many lazy-evaluation passes as possible, until the system deadlocks. One way to view this (loosely) is that enough stars are fired to consume all of the data produced in the first pass, where the source stars were each fired once. This may involve repeatedly firing some of the source stars. However, a lazy-evaluation pass is initiated only if a star in need of inputs already has at least one input with enough particles to fire. Because of this, in some circumstances, the firings that make up an iteration may not be exactly what is expected. In particular, when there is more than one sink star in the system and the sink stars fire at different rates, the ones firing at higher rates may not be fired as many times as expected. It is also possible for one iteration never to terminate.
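As an illustration, the following self-contained C++ sketch runs one such iteration on the source/sink pair discussed with figure 7-2 (A produces two particles per firing, B consumes one). It is not the Ptolemy scheduler: the Star, Port, and Arc types are hypothetical, and the lazy-evaluation passes are collapsed into simple data-driven passes that fire any star with sufficient input and never refire sources, which reproduces the expected behavior on this particular graph but omits the demand-driven mechanism itself.

    // Hypothetical sketch of one DDF-style iteration (not the Ptolemy scheduler).
    #include <deque>
    #include <iostream>
    #include <string>
    #include <vector>

    using Arc = std::deque<int>;                   // FIFO buffer of particles

    struct Port { int arc; int rate; };            // arc index, particles per firing

    struct Star {
        std::string name;
        std::vector<Port> in, out;                 // no inputs => source star
    };

    // A non-source star is runnable when every input arc holds enough particles.
    bool runnable(const Star& s, const std::vector<Arc>& arcs) {
        if (s.in.empty()) return false;            // sources are handled separately
        for (const Port& p : s.in)
            if ((int)arcs[p.arc].size() < p.rate) return false;
        return true;
    }

    void fire(const Star& s, std::vector<Arc>& arcs) {
        for (const Port& p : s.in)
            for (int i = 0; i < p.rate; ++i) arcs[p.arc].pop_front();
        for (const Port& p : s.out)
            for (int i = 0; i < p.rate; ++i) arcs[p.arc].push_back(0);
        std::cout << "fired " << s.name << '\n';
    }

    // One iteration: fire each source once, then keep firing stars that can
    // consume the data produced, until nothing more can fire (deadlock).
    void oneIteration(std::vector<Star>& stars, std::vector<Arc>& arcs) {
        for (const Star& s : stars)
            if (s.in.empty()) fire(s, arcs);
        for (bool progress = true; progress; ) {
            progress = false;
            for (const Star& s : stars)
                if (runnable(s, arcs)) { fire(s, arcs); progress = true; }
        }
    }

    int main() {
        std::vector<Arc> arcs(1);                  // a single arc from A to B
        std::vector<Star> stars = {
            {"A", {}, {{0, 2}}},                   // source: produces 2 per firing
            {"B", {{0, 1}}, {}}                    // sink: consumes 1 per firing
        };
        oneIteration(stars, arcs);                 // fires A once, then B twice
        std::cout << "particles left on the arc: " << arcs[0].size() << '\n';  // 0
    }

One iteration of this sketch fires A once and B twice, returning the arc to its original (empty) state, which is exactly the bounded-memory behavior that the naive policy discussed above failed to achieve.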
When a DDF wormhole is invoked, it will execute one iteration of the DDF system contained in it. This is a serious problem in many applications, since the user may need more control over what constitutes one firing of the wormhole.