Synchronous dataflow (SDF) is a data-driven, statically scheduled domain in Ptolemy. It is a direct implementation of the techniques given in [Lee87a] and [Lee87b]. "Data-driven" means that the availability of `Particle`s at the inputs of a star enables it; stars without any inputs are always enabled. "Statically scheduled" means that the firing order of the stars is determined once, during the start-up phase, and the firing order will be periodic. The SDF domain is one of the most mature in Ptolemy, having a large library of stars and demo programs. It is a simulation domain, but the model of computation is the same as that used in most of the code generation domains. A number of different schedulers, including parallel schedulers, have been developed for this model of computation.

The `go()` method of a star is called a *firing*. When an actor fires, it consumes some number of tokens from its input arcs, and produces some number of output tokens. In synchronous dataflow, these numbers remain constant throughout the execution of the system. For this reason, this model of computation is suitable for synchronous signal processing systems, but not for asynchronous systems. The fact that the firing pattern is determined statically is both a strength and a weakness of this domain. It means that long runs can be very efficient, a fact that is heavily exploited in the code generation domains. But it also means that data-dependent flow of control is not allowed, since that would require dynamically changing firing patterns. The Dynamic Dataflow (DDF) and Boolean Dataflow (BDF) domains were developed to support this, as described in chapters 7 and 8, respectively.
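The fixed consume/produce behavior described above can be illustrated with a small sketch. The class and method names below are hypothetical (this is not Ptolemy's C++ API); the sketch only shows how fixed token rates and data-driven enabling interact:

```python
from collections import deque

# Hypothetical names: a sketch of SDF firing semantics, not Ptolemy code.
class Arc:
    """A FIFO channel connecting two stars."""
    def __init__(self):
        self.fifo = deque()

class Star:
    def __init__(self, name, consume, produce):
        self.name = name
        self.consume = consume    # {arc: tokens consumed per firing}
        self.produce = produce    # {arc: tokens produced per firing}

    def enabled(self):
        # Data-driven: a star is enabled when every input arc holds
        # at least as many tokens as one firing consumes.
        return all(len(a.fifo) >= n for a, n in self.consume.items())

    def fire(self):
        # One firing consumes and produces a FIXED number of tokens.
        for arc, n in self.consume.items():
            for _ in range(n):
                arc.fifo.popleft()
        for arc, n in self.produce.items():
            arc.fifo.extend([self.name] * n)

# A -> B: A produces 2 tokens per firing, B consumes 1 per firing.
ab = Arc()
A = Star("A", consume={}, produce={ab: 2})
B = Star("B", consume={ab: 1}, produce={})

A.fire()                    # A has no inputs, so it is always enabled
assert B.enabled()
B.fire()
B.fire()
assert not B.enabled()      # the FIFO is empty again
```

Because the rates never change, a scheduler can determine before execution how many tokens will accumulate on each arc, which is exactly what makes static scheduling possible.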

Consider a simple connection between three stars, as shown in figure 5-1.

- Vector processing in the SDF domain can be accomplished by consuming and producing multiple tokens on a single firing. For example, a star that computes a fast Fourier transform (FFT) will typically consume and produce 2^n samples when it fires, where n is some integer. Examples of vector processing stars that work this way are `FFTCx`, `Average`, `Burg`, and `LevDur`. This behavior is quite different from the matrix stars, which operate on particles where each individual particle represents a matrix.
- In multirate signal processing systems, a star may consume M samples and produce N, thus achieving a sampling rate conversion of N/M. For example, the `FIR` and `FIRCx` stars optionally perform such a sampling rate conversion, and with an appropriate choice of filter coefficients, can interpolate between samples. Other stars that perform sample rate conversion include `UpSample`, `DownSample`, and `Chop`.
- Multiple signals can be merged using stars such as `Commutator`, or a single signal can be split into subsignals at a lower sample rate using the `Distributor` star.
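The N/M rate-conversion idea in the list above can be sketched in a few lines. The helper functions below are illustrative stand-ins, not Ptolemy stars; they mimic the token-level behavior of down-sampling (consume `factor` tokens, produce 1) and up-sampling (consume 1, produce `factor`, zero-filling between samples, as Ptolemy's `UpSample` does by default):

```python
# Sketch of multirate SDF firings (hypothetical helpers, not Ptolemy code).

def down_sample(stream, factor):
    # Each firing consumes `factor` tokens and produces 1:
    # a sampling rate conversion of 1/factor.
    return [stream[i] for i in range(0, len(stream), factor)]

def up_sample(stream, factor):
    # Each firing consumes 1 token and produces `factor`:
    # a sampling rate conversion of factor/1, zero-filled.
    out = []
    for x in stream:
        out.append(x)
        out.extend([0] * (factor - 1))
    return out

print(down_sample([1, 2, 3, 4, 5, 6], 2))  # [1, 3, 5]
print(up_sample([1, 2, 3], 3))             # [1, 0, 0, 2, 0, 0, 3, 0, 0]
```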

This is a set of three simultaneous equations in three unknowns. The unknowns, r_A, r_B, and r_C, represent the number of firings of stars A, B, and C in one cycle of the periodic schedule.

Suppose for example that star B in figure 5-1 is an `FFTCx` star with its parameters set so that it will consume 128 samples and produce 128 samples. Suppose further that star A produces exactly one sample on each output, and star C consumes one sample from each input. In summary, the balance equations are

r_A × 1 = r_B × 128
r_B × 128 = r_C × 1

The smallest integer solution is r_A = 128, r_B = 1, r_C = 128. Hence, each iteration of the system includes one firing of the `FFTCx` star and 128 firings each of stars A and C.
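The smallest integer solution can be computed mechanically: express every rate as a fraction of one reference star's rate, then scale by the least common multiple of the denominators. This is a sketch of that procedure for the example above (variable names r_A, r_B, r_C follow the text; the solver itself is illustrative):

```python
from fractions import Fraction
from math import lcm

# Balance equations for the chain A -> B -> C in the example:
#   r_A * 1   = r_B * 128   (arc A -> B)
#   r_B * 128 = r_C * 1     (arc B -> C)
# Pick r_B as the reference rate and express the others in terms of it.
r_B = Fraction(1)
r_A = r_B * Fraction(128, 1)   # from r_A = 128 * r_B
r_C = r_B * Fraction(128, 1)   # from r_C = 128 * r_B

# Scale to the smallest all-integer solution.
scale = lcm(r_A.denominator, r_B.denominator, r_C.denominator)
print([int(r * scale) for r in (r_A, r_B, r_C)])  # [128, 1, 128]
```

Here all rates are already integers, so the scale factor is 1; in general (e.g. a 3-to-2 rate converter) the fractions are what make the least-common-multiple step necessary.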

In this case, the balance equations have no non-zero solution. The problem with this system is that there is no sequence of firings that can be repeated indefinitely with bounded memory. If we fire A, B, C in sequence, a single token will be left over on the arc between B and C. If we repeat this sequence, two tokens will be left over. Such a system is said to be *inconsistent*, and is flagged as an error. The SDF scheduler will refuse to run it. If you must run such a system, change the domain of your graph to the DDF domain.
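Inconsistency can be detected by the same rate-propagation idea: pick one star's rate, propagate across every arc via its balance equation, and check whether any equation is violated. The graph and rates below are assumed for illustration (the original figure is not reproduced here); the solver is a sketch, not Ptolemy's scheduler:

```python
from fractions import Fraction

# Assumed inconsistent example: arcs as (src, dst, produced, consumed).
#   A -> B (A produces 1, B consumes 1)
#   B -> C (B produces 2, C consumes 1)
#   A -> C (A produces 1, C consumes 1)
arcs = [("A", "B", 1, 1), ("B", "C", 2, 1), ("A", "C", 1, 1)]

rates = {"A": Fraction(1)}   # pick a reference rate, then propagate
consistent = True
changed = True
while changed:
    changed = False
    for src, dst, p, c in arcs:
        if src in rates and dst not in rates:
            rates[dst] = rates[src] * p / c   # balance: r_src*p = r_dst*c
            changed = True
        elif dst in rates and src not in rates:
            rates[src] = rates[dst] * c / p
            changed = True
        elif rates[src] * p != rates[dst] * c:
            consistent = False   # a balance equation is violated

print(consistent)  # False: only the all-zero solution satisfies the equations
```

The arc A → C demands r_A = r_C while the path through B demands r_C = 2·r_A, which forces every rate to zero; that is exactly the "no non-zero solution" condition the SDF scheduler flags as an error.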