In the SDF domain, an iteration is well-defined. It is the minimum number of firings that brings the buffers back to their original state. In SDF, this can be found by a compile-time scheduler by solving the balance equations. In both BDF and DDF, it turns out that it is undecidable whether such a sequence of firings exists. This means that no algorithm can answer the question for all graphs of a given size in finite time. This explains, in part, why the BDF domain may fail to construct a compile-time schedule and fall back on the DDF schedulers.
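For example, suppose a star A produces two particles per firing onto an arc from which a star B consumes three per firing. The balance equation for that arc is 2rA = 3rB, whose smallest positive integer solution is rA = 3, rB = 2; one iteration therefore consists of three firings of A and two firings of B, after which the arc's buffer holds exactly what it started with.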
We have three simple and obvious criteria that a DDF scheduler should satisfy:
(a) it should be able to execute a graph forever whenever that is possible;
(b) it should execute a graph in bounded memory whenever that is possible; and
(c) it should execute a graph as a sequence of well-defined, determinate iterations.
The reason that these criteria are hard to satisfy is fundamental. We have already pointed out that it is undecidable whether a sequence of firings exists that will return the graph to its original state. This fact can be used to show that it is undecidable whether a graph can be executed in bounded memory. Thus, no finite analysis can always guarantee (b). The trick is that the DDF scheduler in fact has infinite time to run an infinite execution, so, remarkably, it is still possible to guarantee condition (b). The new DDF schedulers do this.
Regarding condition (a), it is also undecidable whether a graph can be executed forever. This question is equivalent to the halting problem, and the DDF model of computation is sufficiently rich that the halting problem cannot always be solved in finite time. Again, we are fortunate that the scheduler has infinite time to carry out an infinite execution. This is really what we mean by dynamic scheduling!
Condition (c) is more subtle and centers around the desire for determinate execution. What we mean by this, intuitively, is that a user should be able to tell immediately what stars will fire in one iteration, knowing the state of the graph. In other words, which stars fire should not depend on arbitrary decisions made by the scheduler, like the order in which it examines the stars.
To illustrate that this is a major issue, suppose we naively define an iteration to consist of "firing all enabled stars at most once." Consider the simple example in the figure.
We have implemented two such scheduling policies in the DDF domain; in addition, an older and faster scheduler is preserved for legacy systems. All three are explained below.
7.2.1 The default scheduler
The default scheduler, realized in the class DDFSimpleSched, first scans all stars and determines which are enabled. In a second pass, it fires the enabled stars. Thus, the order in which the scheduler examines the stars has no impact on which ones fire in a given iteration. On top of this two-pass structure, the DDFSimpleSched class implements a somewhat more elaborate algorithm, described next.
One iteration, by default, consists of firing all enabled and non-deferrable stars once. If no stars fire, then one deferrable star is carefully chosen to be fired. A deferrable star is one with an output arc (other than a self-loop) that already has enough data to satisfy the destination actor. In other words, providing more data on that output arc will not help the downstream actor become enabled; it either already has enough data, or it is waiting for data on another arc. If a deferrable star is fired, it will be the one with the smallest maximum output buffer size. The algorithm is given formally in the figure.
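The policy can be summarized in the following C++ sketch. This is only a paraphrase of the description above, not the actual DDFSimpleSched code; the Star interface (isEnabled, isDeferrable, maxOutputBufferSize, fire) is a hypothetical stand-in for the real Ptolemy classes.

    #include <cstddef>
    #include <vector>

    // Hypothetical star interface used by the sketches in this section; the
    // real Ptolemy classes are different.
    struct Star {
        virtual bool isEnabled() const = 0;        // enough particles on the inputs it needs
        virtual bool isDeferrable() const = 0;     // some non-self-loop output already satisfies its reader
        virtual int  maxOutputBufferSize() const = 0;
        virtual void fire() = 0;
        virtual ~Star() {}
    };

    // One default-style DDF iteration.  Returns false only if nothing can fire (deadlock).
    bool oneDefaultIteration(std::vector<Star*>& stars) {
        // First pass: record every star that is enabled and not deferrable.
        std::vector<Star*> toFire;
        for (std::size_t i = 0; i < stars.size(); i++)
            if (stars[i]->isEnabled() && !stars[i]->isDeferrable())
                toFire.push_back(stars[i]);

        // Second pass: fire them.  The set chosen above cannot depend on firing order.
        for (std::size_t i = 0; i < toFire.size(); i++)
            toFire[i]->fire();
        if (!toFire.empty()) return true;

        // Nothing fired: pick one enabled (necessarily deferrable) star, preferring
        // the one whose largest output buffer is smallest, to limit memory growth.
        Star* chosen = 0;
        for (std::size_t i = 0; i < stars.size(); i++) {
            Star* s = stars[i];
            if (!s->isEnabled()) continue;
            if (chosen == 0 || s->maxOutputBufferSize() < chosen->maxOutputBufferSize())
                chosen = s;
        }
        if (chosen == 0) return false;   // deadlock: no star is enabled
        chosen->fire();
        return true;
    }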
This default iteration is defined to fire actors at most once. Sometimes, a user needs several such basic iterations to be treated as a single iteration. For example, a user may wish for a user iteration to include one firing of an XMgraph star, so that each iteration results in one point plotted; the basic iteration may not include such a firing. Another, more critical example is a wormhole that contains a DDF system but will be embedded in an SDF system. In this case, it is necessary to ensure that one user iteration consists of enough firings to produce the expected number of output particles. To specify the number of firings of a particular star that make up one user iteration, first make sure the target is DDF-default. Then in pigi, place the mouse over the icon of the star in question, and issue the edit-pragmas command ("a"). One pragma (the one understood by this target) will appear; it is called firingsPerIteration. Set it to the desired value. This will then define what makes up an iteration.

7.2.2 The clustering scheduler
If you set the target parameter restructure to YES, you will get a scheduler that clusters SDF actors when possible and invokes the SDF scheduler on them. This scheduler is implemented in the class DDFClustSched. WARNING: as of this writing, this scheduler will not work with wormholes, and will issue a warning. Nonetheless, it is an interesting scheduler for two reasons. The first is its clustering behavior. The second is that it uses a different definition of a basic iteration. In this definition, a basic iteration (loosely) consists of as many firings as possible, subject to the constraints that no actor fires more than once and that deferrable actors are avoided if possible. The complete algorithm is given in the figure.
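Loosely, this definition corresponds to the following greedy loop. This is again a sketch rather than the DDFClustSched code, reusing the hypothetical Star interface from the earlier sketch; the fallback to deferrable stars is one possible reading of "avoided if possible."

    #include <cstddef>
    #include <set>
    #include <vector>

    // One basic iteration in the clustering-scheduler style: fire as many stars
    // as possible, each at most once, taking deferrable stars only when nothing
    // else can fire.
    bool oneGreedyIteration(std::vector<Star*>& stars) {
        std::set<Star*> fired;
        for (;;) {
            Star* next = 0;
            // Prefer an enabled, non-deferrable star that has not fired yet.
            for (std::size_t i = 0; i < stars.size() && next == 0; i++)
                if (!fired.count(stars[i]) && stars[i]->isEnabled() && !stars[i]->isDeferrable())
                    next = stars[i];
            // Only if none exists, accept a deferrable one.
            for (std::size_t i = 0; i < stars.size() && next == 0; i++)
                if (!fired.count(stars[i]) && stars[i]->isEnabled())
                    next = stars[i];
            if (next == 0) break;        // no candidates left: the basic iteration ends
            next->fire();
            fired.insert(next);
        }
        return !fired.empty();
    }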
7.2.3 The fast scheduler
In case the new definition of an iteration is inconvenient for legacy systems, we preserve an older and faster scheduler that is not guaranteed to satisfy criteria (b) and (c) above. The basic operation of the fast scheduler is to repeatedly scan the list of stars in the domain and execute the runnable stars until no more stars are runnable, with certain constraints imposed on the execution of sources. For the purpose of determining whether a star is runnable, the stars are divided into three groups.

The first group consists of stars whose input ports consume a fixed number of particles. All SDF stars, except those with no input ports, are included in this group. For this group, the scheduler simply checks all inputs to determine whether the star is runnable.

The second group consists of stars that determine at run time how many particles they require, and on which input ports. An example is the EndCase star (a multi-input version of the BDF Select star). The EndCase star has one control input and one multiport input for data. The control input value specifies which data input port requires a particle. Stars in this group must specify at run time how many input particles they require on each input port. A star specifies a port with a call to a method called waitPort, and the number of particles needed with a call to waitNum. To determine whether such a star is runnable, the scheduler checks whether the specified input port has the specified number of particles. For the EndCase star, waitPort initially points to the control input port. If the control input has enough data (one particle), the star is fired. When it fires, it reads the value of the particle on the control port and changes the waitPort pointer to the data input port on which it needs the next particle. The star will be fired again when it has enough data on the input port pointed to by waitPort; this time, it collects the input particle and sends it to the output port. See figure 7-5.
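The two-phase behavior of such a star can be illustrated with a self-contained C++ sketch. This is not the real EndCase star; the queues and the waitPort/waitNum members below merely stand in for the actual DDF star interface.

    #include <deque>
    #include <vector>

    // Self-contained sketch of an EndCase-like star.  The scheduler would fire
    // it only when the queue named by waitPort holds at least waitNum particles.
    struct EndCaseLike {
        std::deque<int>                control;  // control input queue
        std::vector< std::deque<int> > data;     // multiport data input queues
        std::deque<int>                output;   // output queue

        int waitPort;   // -1 means "waiting on the control input", otherwise a data port index
        int waitNum;    // number of particles needed on that port

        EndCaseLike(int nDataPorts) : data(nDataPorts), waitPort(-1), waitNum(1) {}

        bool runnable() const {
            const std::deque<int>& q = (waitPort < 0) ? control : data[waitPort];
            return (int) q.size() >= waitNum;
        }

        void fire() {
            if (waitPort < 0) {
                // First firing: consume the control particle and point waitPort
                // at the data input it names.
                waitPort = control.front();
                control.pop_front();
            } else {
                // Second firing: forward one particle from the selected data
                // input, then go back to waiting on the control input.
                output.push_back(data[waitPort].front());
                data[waitPort].pop_front();
                waitPort = -1;
            }
            waitNum = 1;   // always one particle in this sketch
        }
    };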
The third group of stars comprises sources. Sources are always runnable. Source stars introduce a significant complication into the DDF domain: since they are always runnable, it is difficult to ensure that they are not invoked too often. This scheduler has a reasonable but not foolproof policy for dealing with this. Recall that the DDF domain is a superset of the SDF domain. The definition of one iteration for this scheduler tries to obtain the same results as the SDF scheduler when only SDF stars are used. In the SDF domain, the number of firings of each source star, relative to other stars, is determined by solving the balance equations. In the DDF domain, however, the balance equations do not apply in the same form. The technique we use instead is lazy evaluation.
Lazy evaluation
At the beginning of each iteration of a DDF application, we fire all source stars exactly once, and temporarily declare them "not runnable." We also fire all stars that have enough initial tokens on their inputs. After that, the scheduler starts scanning the list of stars in the domain. If a star has some particles on some input arcs, but is not runnable yet, then the star initiates the (lazy) evaluation of those stars that are connected to the input ports requiring more data. This evaluation is "lazy" because it occurs only if the data it produces are actually needed. The lazy-evaluation technique ensures that the relative number of firings of source stars is the same under the DDF scheduler as it would be under the SDF scheduler.
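The heart of the technique, demand-driven firing of upstream stars, might be sketched as follows. The LazyStar interface and the starvedSources method are hypothetical illustrations, not part of the Ptolemy kernel.

    #include <cstddef>
    #include <set>
    #include <vector>

    // Hypothetical interface for the lazy-evaluation sketch.
    struct LazyStar {
        virtual bool runnable() const = 0;
        virtual std::vector<LazyStar*> starvedSources() = 0;  // stars feeding the inputs that still lack data
        virtual void fire() = 0;
        virtual ~LazyStar() {}
    };

    // Try to make 's' runnable by demanding firings of the stars upstream of its
    // starving inputs, then fire it.  'visited' keeps feedback loops from causing
    // endless demands.  Returns true if 's' eventually fired.
    bool lazyEval(LazyStar* s, std::set<LazyStar*>& visited) {
        if (s->runnable()) { s->fire(); return true; }
        if (!visited.insert(s).second) return false;       // already being evaluated
        std::vector<LazyStar*> upstream = s->starvedSources();
        for (std::size_t i = 0; i < upstream.size(); i++)
            lazyEval(upstream[i], visited);                 // demand data only where it is needed
        if (s->runnable()) { s->fire(); return true; }
        return false;
    }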