
13.3 Targets


A code generation Domain is specific to the language generated, such as C (CGC), Sproc assembly code (Sproc) [Mur93], Silage [Kal93], DSP56000 assembly code (CG56), and DSP96000 assembly code (CG96). Each code generation domain has a default target that defines routines generic to the target language. A derived Target that defines architecture-specific routines can then be written. A given language, particularly a generic language such as C, may run on many target architectures. Code generation functions are cleanly divided between the default domain target and the architecture-specific target.

All target architectures are derived from the base class Target. The special class KnownTarget is used to add targets to the known list of targets, much as KnownBlock is used to add stars (and other blocks) to the known block list and to assign names to them.

A Target object has methods for generating a schedule, compiling the code, and running the code (which may involve downloading code to target hardware and beginning its execution). There may also be child targets (representing the processors of a multiprocessor target), together with methods for scheduling the communication between them. Targets also have user-specified parameters.

13.3.1 Single-processor target

The base target for all code generation domains is the CGTarget, which represents a single processor by default. This target is called default-CG in the target list for the CG domain. As the generic code generation target, the CGTarget class defines many common functions for code generation targets. Methods defined here include virtual methods to generate, display, compile, and run the code. Derived targets are free to redefine these virtual methods if necessary.

Code streams

A code generation target manages code streams, which are used to store star- and target-generated code. The CGTarget class has two predefined code streams: myCode and procedures. The myCode stream is referred to as CODE and the procedures stream is called PROCEDURE; these names should be used when referring to these streams, as in "CodeStream* code = getStream(CODE)". Derived targets are free to add more code streams using the CGTarget method addStream(stream-name). For example, the default CGC target defines fourteen additional code streams.
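As an illustration, a derived target might register an extra stream when it is constructed and fill it later. The sketch below is hypothetical: the class name MyCTarget, the stream name globalDecls, the constructor arguments, and the declareGlobals() helper are invented for this example; addStream(), getStream(), and CodeStream::put() are the CGTarget facilities just described.

#include "CGTarget.h"     // Ptolemy code generation kernel header

class MyCTarget : public CGTarget {
public:
    MyCTarget(const char* name, const char* starClass, const char* desc)
        : CGTarget(name, starClass, desc) {
        // Register an extra stream alongside the predefined CODE and
        // PROCEDURE streams.
        addStream("globalDecls");
    }
protected:
    void declareGlobals() {
        // Fetch the stream by name and append code to it.  The second
        // argument to put() is a unique name, so the same declaration is
        // inserted only once no matter how often this method runs.
        CodeStream* decls = getStream("globalDecls");
        decls->put("static int errorCount = 0;\n", "errorCount");
    }
};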

Other methods, such as addProcedure(code, uniquename), can be defined to provide a more efficient or convenient interface to a specific code stream (in this case, procedures). With addProcedure it becomes clear why unique names are necessary. Recall that addProcedure is used to add declarations outside of the main body of the code. For example, say we wanted to write a function in C to multiply two numbers. The codeblock to do this could read:

codeblock(sillyMultiply) {
/* A silly function */
double $sharedSymbol(silly,mult)(double a, double b)
{
    double m;
    m = a*b;
    return m;
}
}
Note that in this codeblock we used the sharedSymbol macro described in the code generation macros section. To add this code to the procedures stream, in the initCode method of the star, we can call any of the following:

addProcedure(sillyMultiply, "mult");
addCode(sillyMultiply, "procedures", "mult");
getStream("procedures")->put(sillyMultiply, "mult");

As with addCode, addProcedure returns TRUE or FALSE to indicate whether the code was inserted into the code stream. Taking this into account, we could have added the code line by line:

if (addProcedure("/* A silly function */\n","mult")) {
    addProcedure("double $sharedSymbol(silly,mult)(double a, double b)\n");
    addProcedure("{\n");
    addProcedure("\tdouble m;\n");
    addProcedure("\tm = a*b;\n");
    addProcedure("\treturn m;\n");
    addProcedure("}\n");
}

13.3.2 Assembly code streams

In the assembly language domains, code is generated into four streams. The streams inherited from CGTarget are the CODE and PROCEDURE streams. The two new streams are:


mainLoop
Code added to this stream comprises the main loop of the generated algorithm. All addCode calls from a star's go() method are automatically appended to this stream unless another stream is supplied as an argument.

trailer
Code added to this stream comprises the wrapup section of the generated algorithm. All addCode calls from a star's wrapup() method are automatically appended to this stream unless another stream is supplied as an argument; a sketch of both defaults follows this list.
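For example, a star in an assembly domain can rely on these defaults in its go() method but direct some of its wrapup code elsewhere by naming a stream explicitly. The fragments below are hypothetical (the star class MyAsmStar and the codeblocks loopBody and shutdownProc are invented names); addCode() with and without a stream-name argument behaves as described above.

void MyAsmStar::go() {
    // No stream argument: code added from go() lands in the mainLoop stream.
    addCode(loopBody);
}

void MyAsmStar::wrapup() {
    // Without a stream argument this code would go to the trailer stream;
    // naming a stream sends it elsewhere, here to the procedures stream.
    addCode(shutdownProc, "procedures");
}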

Code generation

Once the program graph is scheduled, the target generates the code in the virtual method generateCode(). (Note: code streams should be initialized before this method is called.) All the methods called by generateCode are virtual, thus allowing for target customization. The generateCode method first calls allocateMemory(), which allocates the target resources. After resources are allocated, the initCode methods of the stars are called by codeGenInit(). The next step is to form the main loop by calling the method mainLoopCode(). The number of iteration cycles is determined by the argument of the "run" directive that the user specifies in pigi or in ptcl. To complete the body of the main loop, the go() methods of the stars are called in the scheduled order. After the main loop is formed, the wrapup() methods of the stars are called.
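The sequence can be summarized in code. The outline below is only a paraphrase of the steps just described, written as if a hypothetical derived target reimplemented generateCode(); the real CGTarget implementation also handles galaxy setup, scheduling, and error checking.

void MySimpleTarget::generateCode() {
    allocateMemory();   // allocate target resources for the schedule
    codeGenInit();      // calls the initCode() method of every star
    mainLoopCode();     // opens the main loop for the requested number of
                        // iterations and invokes each star's go() in
                        // scheduled order
    // after the main loop, each star's wrapup() method is called
    frameCode();        // piece the code streams together into myCode
    writeCode();        // write myCode to "code.output" in destDirectory
}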

At this point all of the code has been generated, but it may be spread across multiple target streams. The frameCode() method is then called to piece the code streams together and place the unified stream into myCode. Finally, the code is written to a file by the method writeCode(). The default file name is "code.output", and the file is placed in the directory specified by the target parameter destDirectory.

Once all of the code has been generated for a target, we are ready to compile, load, and execute it. Derived targets should redefine the virtual methods compileCode(), loadCode(), and runCode() to perform these operations. At times it does not make sense to have separate loadCode() and runCode() methods; in these cases, the two operations should be collapsed into runCode().
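For instance, a target for a hypothetical DSP board might invoke an external assembler and downloader from these hooks. Everything in this sketch is illustrative: the class name, the shell commands, and the assumption that these methods return nonzero on success; only the method names compileCode() and runCode() come from CGTarget.

#include <stdlib.h>     // for system()

int MyBoardTarget::compileCode() {
    // Run an (imaginary) assembler on the generated file.
    return system("asm56000 -b code.output") == 0;
}

int MyBoardTarget::runCode() {
    // This hypothetical board has no separate load step, so loading and
    // running are collapsed into runCode(), as suggested above.
    return system("loader56 -run code.lod") == 0;
}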

13.3.3 Multiprocessor targets

Targets representing multiple processors are also derived from the CGTarget class. The abstract base class for all multiprocessor targets is MultiTarget, which resides in the $(PTOLEMY)/src/domains/cg/kernel directory. The CGMultiTarget class, derived from MultiTarget, is the base class from which the multiprocessor targets in the code generation domains are built; it appears as FullyConnected in the CG domain target list.

The design of Ptolemy is also intended to support heterogeneous multiprocessor targets. In the future, the base class of all "abstract" heterogeneous multiprocessor targets will be derived from the MultiTarget class. For such targets, certain actors must be assigned to certain targets, and the cost of a given actor is in general a function of which child target it is assigned to. We have developed parallel schedulers that address this problem [Sih91].

We have implemented, or are in the process of implementing, both "abstract" and "concrete" multiprocessor targets. For example, the classes CGMultiTarget and CGSharedBus represent sets of homogeneous single-processor targets of arbitrary type, connected in either a fully-connected or shared-bus topology, with parametrized communication costs. These targets, however, use only the CG domain stars and hence do not actually generate code (recall that CG domain stars are "comment generators"). Other, actual implementations of multiprocessor systems include the CM-5 (CGCCm5Target in the CGC domain), the Sproc multiprocessor DSP [Mur93], and the ordered-transaction architecture [Sri93]. Refer to the CG56 domain documentation for the CG56MultiSim target, or the CGC domain documentation for the CGCMultiTarget class, as examples of "concrete" multiprocessor targets. In this section, we concentrate on the "abstract" multiprocessor target classes that are in the $(PTOLEMY)/src/domains/cg/targets directory.

CGMultiTarget is the base target class for all homogeneous targets. By default, it models a fully-connected multiprocessor architecture: when a processor wants to communicate with another processor, it can do so immediately. The scheduleComm() method returns the time at which the required communication is scheduled; in the CGMultiTarget class, it returns the same time at which the communication is requested. CGSharedBus, which is derived from the CGMultiTarget class, is the base target class for all multiprocessor targets having a shared-bus topology. In the CGSharedBus class, the scheduleComm() method schedules the required communication on the shared-bus member object of that class and returns the scheduled time. The communication cost (in time) is modeled by the commTime() method: given which processors are involved in the communication and how many tokens are transmitted, it returns the expected duration of the communication once it has started. By default (that is, in a fully-connected topology), this cost depends only on the number of tokens.
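As an illustration, a target modeling some other interconnect would override commTime() with its own cost model. The sketch below is hypothetical: the class, the ring topology, and the cost constants are invented, and the argument list is schematic rather than the exact MultiTarget signature.

int MyRingTarget::commTime(int src, int dest, int nTokens, int /*type*/) {
    // Cost model for an imaginary ring interconnect: a fixed cost per hop
    // between the two processors plus a cost per token transferred.
    // A real model would also account for wrap-around on the ring.
    int hops = (src < dest) ? dest - src : src - dest;
    return 10 * hops + 2 * nTokens;
}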

A CGMultiTarget has a sequence of child target objects to represent each of the individual processors. The number of processors is determined by an IntState, nprocs, and the type of the child targets is specified by a StringState, childType. Refer to the User's Manual for details on how to specify the various target parameters. In the setup stage, the child targets are created and added to the child target list as members of the multiprocessor target. Classes derived from MultiTarget represent the topology of the multiprocessor network (communication costs between processors, schedules for use of communication facilities, etc.), while single-processor child targets can represent arbitrary types of processors. The resource allocation problem is divided between the parent target, representing the shared resources, and the child targets, representing the resources that are local to each processor.

The main role of a multiprocessor target is to set up one of the chosen parallel schedulers, and to coordinate the child targets. The CGMultiTarget class has a set of parameters to select parallel scheduling options. See the schedulers section for a detailed discussion on parallel schedulers. The selected parallel scheduler schedules the program graph onto the child targets and the scheduling results are displayed on a Gantt chart. The parent multiprocessor target collects the code from each of the child targets after the child targets have generated code based on the scheduling results. By default, it merges all of the child-processor code into a single file. If separate files are required, then one approach is to create separate files with names derived from the child target names and write the code to these files in the frameCode() method of the multi-target.

Interprocessor communication (IPC) stars are created by the multiprocessor target through the methods createSend() and createReceive(). These stars are spliced into the subgalaxies that are created and handed down to the child targets. Typically, these methods just create the appropriate IPC star and return a pointer to the object created. Each send/receive pair is matched in the pairSendReceive() method. Typically, this might involve setting pointers in the send/receive pair to point to each other.
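A derived multiprocessor target typically just instantiates its own IPC stars in these hooks and wires each pair together. The sketch below is schematic: the star classes MySend and MyRecv and their partner member are invented, and the argument lists are abbreviated versions of the virtual methods declared in MultiTarget.h.

DataFlowStar* MyMultiTarget::createSend(int from, int to, int num) {
    // The target could use the processor numbers to configure the star.
    return new MySend;
}

DataFlowStar* MyMultiTarget::createReceive(int from, int to, int num) {
    return new MyRecv;
}

void MyMultiTarget::pairSendReceive(DataFlowStar* s, DataFlowStar* r) {
    // Let the two stars of the pair point at each other, e.g. so that they
    // can agree on a shared buffer or message tag when generating code.
    ((MySend*) s)->partner = (MyRecv*) r;
    ((MyRecv*) r)->partner = (MySend*) s;
}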

There is no preprocessor for targets analogous to ptlang for stars, so designing a customized multiprocessor target is somewhat more involved than designing a customized star. In particular, if the interconnection topology is neither fully connected nor a shared bus, the communication scheduling must be designed into the target, which complicates the design further. The best way to design a target is therefore to start from an already-implemented target, such as the CGCMultiTarget class in the CGC domain.




Copyright © 1990-1997, University of California. All rights reserved.