Document version: $Revision: 1.3 $ $Date: 2002/12/06 23:24:08 $
Chapter descriptions; roles of Berkeley, the middleware groups, and this document.
Detailed description of the demo scenario, including which entities are present and how they behave. This chapter also defines high-level architecture decisions such as global/local coordinate systems, global/local time, etc.
Informal, linguistic description of the philosophies used to construct each component.
Berkeley's role
Middleware groups' role
Challenge Architecture Document goals
To start the game, the motes comprising the sensor network are deployed onto the playing field in a sleep state. An external node broadcasts a begin signal to the sensor network to indicate the start of global time. The pursuers and evaders then enter the playing field and remain within it for the duration of the game. The sensor network provides a variety of services to both pursuers and other sensor motes: time synchronization, localization, critter (moving object: pursuer or evader) estimation, etc. For the purpose of the game, the sole goal of these services is to produce estimates of the positions, velocities, and identities of critters in the playing field. This information is time-stamped and routed to all pursuers in the playing field. The pursuers have onboard computation facilities comparable to a laptop computer and may optionally communicate over a separate, robust channel to coordinate the capture of the evaders. When all evaders are captured (a capture occurs when a pursuer is "close enough" to an evader), the game ends. A base station outside the playing area provides logging and visualization services.
Pursuers
Mote Sensors
Evaders
The architecture defines a set of components which may implement algorithms and may behave as services. The architecture further defines the input and output structures and protocols accepted and emitted by components, implicit or explicit constraints and behaviors pertinent to the components, and interrelationships between components.
In this chapter, we first define just the subset of the architecture seen from the application layer: the pursuit-evasion game demo. From there, we iteratively extend that architecture with likely supporting components, up to and including top-level TinyOS components. As this document evolves, we will keep an eye toward abstraction and generality, ideally creating a refactored specification with broader application than just the game demo.
Prototypes are the essence of the architecture. Prototypes define the minimal interface provided by components. The goal is to create an architecture in which dissimilar implementations of components are interchangeable if they provide equivalent facilities, while at the same time not imposing unnecessary constraints on the underlying algorithms.
These Prototypes formally describe the API that certain classes of components and algorithms must adhere to. Concrete implementations of these prototypes provide at least the described interfaces, but may include additional interfaces specific to the algorithm at hand, such as Sensors and Actuators. Concrete implementations that wish to be used in the demo must fully specify themselves in the context of this document. That is, they must clearly define their abstract, formal, NesC, and graphical architectures. These concrete specifications will be wholly included in the architecture document.
Each service is implemented as a separate component. We intend to provide a coordination component that schedules the other components and manages shared resources. Each component is initialized in turn, during which it is responsible for registering itself with the coordination component. Each component registers how often it should be executed (time-triggered) and which events it should receive (messages, sensor readings, etc.). The coordination component is responsible for meeting these demands.
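As a rough illustration, registration might look something like the following sketch. The coordination component is not yet specified, so the interface name and signatures here are our own invention, not part of the architecture:

// Hypothetical sketch only: the coordination component is not yet
// specified; this interface name and these signatures are illustrative.
interface ServiceCoordination {
    // register to be run every 'interval' milliseconds (time-triggered)
    command result_t registerPeriodic( uint16_t interval );
    // register interest in a class of events (messages, sensor readings, etc.)
    command result_t registerEvent( uint8_t eventClass );
    // fired by the coordinator when this component is scheduled to run
    event result_t run();
}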
To filter data or calibrate a sensor or actuator, we intend to create components that both provide and use the interface they are filtering/calibrating. This allows us to chain any number of filters or calibrations transparently.
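For example, a filter over sensor readings might be sketched as follows. This is a minimal illustration of the pattern using the TinyOS ADC interface; the module name and the (trivial) filtering step are ours:

// Minimal sketch of the filter pattern: the module both provides and uses
// the ADC interface, so any number of such filters can be stacked.
module PrototypeFilterM {
    provides { interface ADC as FilteredADC; }
    uses { interface ADC as RawADC; }
}
implementation {
    command result_t FilteredADC.getData() {
        return call RawADC.getData();
    }
    command result_t FilteredADC.getContinuousData() {
        return call RawADC.getContinuousData();
    }
    event result_t RawADC.dataReady( uint16_t data ) {
        // a real filter would smooth or calibrate 'data' here
        return signal FilteredADC.dataReady( data );
    }
}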
By default, we are not providing a resource sharing infrastructure beyond the sharing of the CPU and RF channel via the service coordination component. That is, we are assuming that in any particular configuration, no more than one component will want to use, say, the sounder. Creating a configuration in which more than one component needs access to the same resource is considered malformed. If this becomes a problem in practice, we will work to develop a resource sharing scheme. We are deferring that solution until we see conflicts arise in practice. That way, we can develop something well-suited for the problem (instead of something ill-suited).
Sensor readings (input) are event-driven. Processing that depends on sensor readings, say for filtering data, is likewise event-driven. This cascades all the way up: events are fired both for estimating position and for initiating the broadcast of those estimates. Actuation (output) is command-driven; that includes both movement and outgoing communication.
We want to abstract away from the byte-packed messages used for radio communication. Each component that communicates via messages with other components (either on the local mote or on remote motes) operates in the context of a structure containing native types. We package all relevant information in a single structure. This reduces the need to redefine interfaces when/if we adjust only the particular data passed between components. It also results in a one-to-one correspondence between message interfaces and message structures.
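For instance, a target-estimate message would be declared as a single structure of native types. The field set here is purely illustrative (it borrows the location_t type defined in the localization section below):

// Illustrative only: one native-typed structure per message interface.
typedef struct {
    uint16_t sourceMoteID;  // mote that produced the estimate
    uint16_t timeStamp;     // global time of the estimate
    location_t position;    // native types rather than packed bytes
    char target_id;         // identity of the tracked critter
} TargetEstimate_t;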
This component aggregates sensor readings among a group of motes. The protocol for data measurement and aggregation is application-specific and transparent to the rest of the demo code. If a target is detected, the component fires a TargetPosition event on the motes attached to the pursuers. The event carries the position of a target and a target identifier. The protocol will attempt to use the same identifier consistently to refer to the same target. This is accomplished with the help of the group management component described in Section 4. A higher-level protocol can be used to compile a list of all identified targets and their current locations.
Estimate Target depends on the PacketRouting component. In particular, it will need the RouteMobile interface to deliver the target information to the pursuers. The target estimation component also interacts with the location service: it needs location information to calculate the position of the target from the positions of the detecting motes.
module PrototypeEstimateTargetM {
    provides {
        interface TargetPosition;
    }
    uses {
        interface ReceiveMsg;
        interface SendMobile;
    }
}
interface TargetPosition {
    event result_t TargetPosition(
        location_t position,
        char target_id
    );
}
Below is a preliminary API for group management services in NEST (MIT, OSU, UVa).
The overall module definition is:
module GroupManagementM {
    provides {
        interface StdControl;
        interface GroupManagementGlobal as GMGlobal;
        interface GroupManagementNeighbor as GMNeighbor;
        interface GroupManagementTracking as GMTracking;
    }
    uses {
        /* ... */
    }
}
The group management component provides a suite of services with three different functionalities, described below.
This service exports these calls:
interface GroupManagementGlobal {
command result_t multicast(
uint8_t type,
char *msg
);
event result_t receiveMulticast(
uint8_t type,
char *msg
);
event result_t leader(
uint8_t type,
uint8_t on_off
);
}
The multicast primitive communicates a message efficiently to all destinations within a given radius, either configured into the service or indicated in the message header. The type parameter distinguishes the different types of multicast services described in this document; this service has type LOCATION. The implementation transparently uses MIT's location-dependent group formation protocol. receiveMulticast is an event raised to inform an application that a multicast message has been received. The underlying routing scheme uses a leader election protocol. Nodes that are elected leaders are notified via the leader event both when they become and when they cease to be leaders in this protocol. The application can ignore that event or use it for application-level functions that need to be performed at selected nodes in the network. Check MIT's group formation documentation for more information on how leaders are elected and what properties they have.
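A hypothetical fragment of application code using this interface (inside a module implementation wired to GMGlobal, and assuming the LOCATION constant is in scope):

// Hypothetical usage sketch of GroupManagementGlobal.
task void announce() {
    char msg[16];
    // ... fill msg with application data ...
    call GMGlobal.multicast( LOCATION, msg );
}
event result_t GMGlobal.receiveMulticast( uint8_t type, char *msg ) {
    // process a location-scoped multicast message
    return SUCCESS;
}
event result_t GMGlobal.leader( uint8_t type, uint8_t on_off ) {
    // on_off != 0: this node became a leader; on_off == 0: it ceased to be
    return SUCCESS;
}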
interface GroupManagementNeighbor {
command result_t getNeighborhoodInfo();
}
The main call exported is getNeighborhoodInfo().
It returns a data structure with information regarding neighborhood health.
Berkeley reports that the "raw" connectivity information in the mote network is not necessarily suitable for routing and other reliable communication and infrastructure tasks. We therefore:
create a Neighborhood Maintenance Component that, depending on the application, provides "improved" neighborhood information. This "improved" information preserves the cardinal features of the "raw" information, such as connectivity.
note that this is not necessarily the immediate neighborhood, but possibly extends to 2-3 hops from a node.
OSU has a few schemes for how this can be done efficiently, locally, and with a great degree of fault tolerance.
Interface Specification
The interface is:
interface GroupManagementTracking {
command result_t join(
uint8_t target_signature
);
command result_t leave(
uint8_t target_signature
);
command result_t setState(
char state
);
command result_t getState(
char *state
);
event result_t leader(
uint8_t type,
uint8_t on_off
);
}
The main abstraction exported by the service is that of tracking groups. A tracking group is formed among all nodes sensing the target, as defined by a given sensory signature. The unique group name unambiguously labels each target. As the target moves, the membership of the group changes, but group identity remains the same. Hence, proximity-based groups will help identify and track different evaders. The main API is:
command result_t join( uint8_t target_signature )
The call specifies the detected target signature. The call is executed when a node senses a target of that particular signature. The call returns a group id specifying which target of that signature is currently in the proximity of the joining node, as maintained by the group management service. Hence, a node's code might look something like:
if( target_signature is detected ) {
    target_id = join( target_signature );
    tell pursuer that I see target_id at my_location;
}
Observe that in the absence of tracking groups the node would not be able to immediately identify which target it is seeing (e.g., whether it is seeing the evader or one of the pursuers). Identifying the target locally is the main advantage of tracking groups. Other API calls are:
command result_t leave( target_signature )
The leave call specifies that the target can no longer be locally sensed by this node. The service also supports the calls:
event result_t leader( type, on_off )
command result_t setState( state )
command result_t getState( state )
As before, the leader event notifies the application when its node becomes or ceases to be a leader, except that when type=TRACKING, the event refers to the leader of the tracking group. This leader changes as the group migrates. The invariants maintained are the group id and the fact that the leader is always within the sensory horizon of the target tracked by this specific group. setState and getState are used to save and restore state that the algorithm maintains persistently across different leaders. Hence, when a node becomes leader it can getState and resume the computation from where the last leader left off. The node would periodically checkpoint the computation using setState.
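A hypothetical sketch of this hand-off (inside a module implementation wired to GMTracking, assuming the TRACKING constant is in scope):

// Hypothetical sketch: resume computation on leader change, checkpoint
// periodically so the next leader can pick up where we left off.
char current_state;

event result_t GMTracking.leader( uint8_t type, uint8_t on_off ) {
    if( type == TRACKING && on_off ) {
        // we just became the tracking-group leader: restore the checkpoint
        call GMTracking.getState( &current_state );
    }
    return SUCCESS;
}
task void checkpoint() {
    // executed periodically while we are the leader
    call GMTracking.setState( current_state );
}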
We break localization into four sub-systems: (1) sensing/actuation, (2) data management, (3) computation, and (4) system control.
This breakdown gives us modularity and interchangeability because each sub-system has its own API.
First, we describe each sub-system and a few important points about them. Then, we write down the APIs for each sub-system. Finally, we make a few concluding remarks about protocols, incremental development, non-homogenous networks, and data representation.
The sensing/actuation sub-system gives you ranging and/or angle data (with which you would later do multi-lateration and/or triangulation, respectively). This sub-system should be broken into at least two components: sensor and actuator. This allows non-homogenous networks, e.g. an infrastructure might always transmit localization beacons while the network always senses or vice versa. We can have a homogenous network by simply installing both components on every mote.
This sub-system has three top interfaces: one with which it can be started and stopped, one with which it can be commanded to actuate, and one with which it signals newly sensed data. It also has a lower interface with which it interacts with the underlying actuator or sensor hardware.
The data sub-system holds ranging/angle/location data of all important neighbors. This sub-system is not just a passive data structure; it is actually quite active. Let's say, for instance, that my localization algorithm works best with 8 neighbors. If I have more than 8 neighbors, I need to know which neighbors to ignore (perhaps those with the noisiest ranging estimates, or perhaps those at short distances). I also have to know when data becomes old and invalid, etc. Every implementation of this sub-system will have to make all of these decisions based on the type of ranging being used and the type of localization algorithm being used.
This sub-system has a bottom interface with which it receives new data. It also has a top interface with which it gives data and a top interface with which it can be commanded to start and stop (starting and stopping here might not be well defined).
The computational level is where we do triangulation, multilateration, or whatever. This is what most people think of when they hear "localization", but it is really the easiest part to write for an embedded system like TinyOS.
This sub-system has a bottom interface with which it requests data from the data management sub-system. It also has a top interface with which it gives new location estimates and a top interface with which it can be commanded to start or stop.
This sub-system controls all the other sub-systems. This should be separate from the other systems because its functionality is completely defined by the application. For example, in a static environment we may only want to localize once at the beginning of the application and then never again. If something walks into the room, we might want all nodes near the moving node to help it localize. In a completely dynamic environment, we might want all nodes localizing by following some scheduling algorithm, which would be implemented here. Sometimes, we may want very frequent ranging estimates but only infrequent location estimates, etc.
This sub-system has a bottom interface with which it controls all lower sub-systems. It also has a top interface with which it is told to start and stop.
module PrototypeLocalizationActuatorM {
    provides {
        interface StdControl;
        interface LocalizationActuator;
    }
    uses {
        interface Actuator;
    }
}

module PrototypeLocalizationSensorM {
    provides {
        interface StdControl;
        interface LocalizationSensor;
    }
    uses {
        interface Sensor;
    }
}

module PrototypeLocalizationDataManagerM {
    provides {
        interface StdControl;
        interface LocalizationData;
    }
    uses {
        interface LocalizationSensor;
    }
}

module PrototypeLocalizationComputationM {
    provides {
        interface StdControl;
        interface LocalizationCompute;
        interface LocalizationSensor;
    }
    uses {
        interface LocalizationData;
    }
}

module PrototypeLocalizationControlM {
    provides {
        interface StdControl;
    }
    uses {
        interface StdControl as LocalizationSensorControl;
        interface StdControl as LocalizationActuatorControl;
        interface StdControl as LocalizationDataControl;
        interface StdControl as LocalizationComputationControl;
        interface LocalizationActuator;
        interface LocalizationCompute;
    }
}
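To make the relationships concrete, a plausible wiring of the five prototypes is sketched below. This is strictly illustrative: the configuration name is ours, and actual wirings will depend on the concrete implementations.

// Illustrative wiring only; the configuration name is ours.
configuration PrototypeLocalizationC { }
implementation {
    components PrototypeLocalizationControlM as Control,
               PrototypeLocalizationSensorM as Sensor,
               PrototypeLocalizationActuatorM as Actuator,
               PrototypeLocalizationDataManagerM as Data,
               PrototypeLocalizationComputationM as Compute;

    // control starts/stops and commands every other sub-system
    Control.LocalizationSensorControl -> Sensor.StdControl;
    Control.LocalizationActuatorControl -> Actuator.StdControl;
    Control.LocalizationDataControl -> Data.StdControl;
    Control.LocalizationComputationControl -> Compute.StdControl;
    Control.LocalizationActuator -> Actuator.LocalizationActuator;
    Control.LocalizationCompute -> Compute.LocalizationCompute;

    // data flows from the sensor into data management, then to computation
    Data.LocalizationSensor -> Sensor.LocalizationSensor;
    Compute.LocalizationData -> Data.LocalizationData;
}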
interface LocalizationActuator {
command result_t Actuate(
uint16_t actuationDestinationAddress,
uint16_t dataDestinationAddress
);
}
interface LocalizationSensor {
    event result_t DataSensed(
        localization_t newData
    );
}
interface LocalizationData {
command result_t GetLocalizationInfo(
uint16_t moteID
);
}
interface LocalizationCompute {
command result_t Localize();
}
typedef struct {
    uint16_t moteID;
    ranging_t* distanceFromMe;  // distance estimate and standard deviation
    angle_t* angleFromMe;       // angle estimates and standard deviations
    location_t* location;       // current location estimate
} localization_t;
typedef struct {
uint16_t EstimatedXCoord; //or theta angle for spherical coords
uint16_t XCoordStdv;
uint16_t EstimatedYCoord; //or phi angle
uint16_t YCoordStdv;
uint16_t EstimatedZCoord; //or r value
uint16_t ZCoordStdv;
uint16_t CoordinateSystemID;
} location_t;
typedef struct {
uint16_t DistanceFromMe;
uint16_t DistanceStdv;
} ranging_t;
typedef struct {
uint16_t phiAngleRelativeToMe;
uint16_t phiAngleStdv;
uint16_t thetaAngleRelativeToMe;
uint16_t thetaAngleStdv;
} angle_t;
Note that these interfaces are really simple and don't support any protocols. However, you can always wrap any sub-system in a component that gives you a more sophisticated interface to support your protocol. For example, you may want an interface that allows you to request N chirps at frequency F. This can be done with a wrapper component around your LocalizationActuator component. You might also have motes that want to ask other motes for their locations. You can do this by wrapping your LocalizationData component in a wrapper component that interprets packet commands. By not including these things in the interfaces above, we are separating the functionality from the protocol, thereby allowing us to interchange protocols.
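For example, the chirp request above might be wrapped like this (the interface name and parameters are hypothetical):

// Hypothetical protocol wrapper around the actuator sub-system.
interface ChirpSchedule {
    // request n chirps at frequency f (Hz)
    command result_t requestChirps( uint8_t n, uint16_t f );
}

module ChirpWrapperM {
    provides { interface ChirpSchedule; }
    uses { interface LocalizationActuator; }
}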
Given the above about protocols, we do not have to assume that each mote has all four sub-systems (i.e., a homogeneity assumption). For example, system control might be contained in a single "leader" mote, or the entire computational sub-system might be implemented centrally on a PC. In my particular case, for example, all I have is a sensor/actuator system that sends time-of-flight chirps and makes ranging estimates. I could wrap it in a component that chirps when it receives my command packet and sends me back the data in a data packet. Then, data management, computation, and chirp scheduling are done centrally in Matlab. This is good for incremental development.
Notice that we have a huge problem with data representation. If the above localization_t data structure is not sufficient for all or most localization applications, there is little hope of interchanging components. I note three main problems here: units, coordinate systems, and error terms.
Do we store every distance estimate with its units (i.e., cm or meters or hop-counts), or do we just use the convention that all distances are in centimeters? What about systems with relative distances that don't know the units of their ranging estimates? In the localization_t data structure above, I assumed we would use the convention that all distance estimates are in centimeters and all phi/theta estimates are in degrees.
Are all of our positions stored with their coordinate systems, e.g., if a position is in GPS coordinates, should it say so? What about relative coordinate systems? Do we need a LocalizationCoordinateSystem component to bootstrap a coordinate system? How do we identify the units of a relative coordinate system? How do we identify the identity of a relative coordinate system, i.e., when two networks that have different relative coordinate systems meet, how do we resolve them? What about networks that have two overlaid coordinate systems, i.e., some GPS nodes and some nodes on a relative or room-based coordinate system? In the localization_t data structure above, I assumed that we could identify each coordinate system with the ID of the leader or creator of that coordinate system. However, I have not defined a coordinateSystem component.
Quite often, ranging, angle, or location estimates come with error terms. How do we represent this? With a canonical probability distribution? We could assume Gaussian noise on everything and always couple every estimate with a standard deviation. Is that sufficient for everybody? In the localization_t data structure above, I assumed that it is.
We propose the following tentative module for power management in the NEST challenge application on the Berkeley OEP. The module provides interfaces for implementing both centralized and decentralized power-state control algorithms for wireless sensors.
The centralized version contains TurnOn and TurnOff interfaces for a sentry to control the power states of each individual sensor in its group. If a sentry needs to turn a particular sensor off, it broadcasts a remote_turn_off message with the sensor ID and the period at which the sensor wakes up and checks for remote_turn_on messages; if such a message is detected, the sensor stays on; otherwise, it goes back off. The sentry knows when to send remote_turn_on or new remote_turn_off messages to a sensor because it knows when that sensor wakes up to check for messages.
In the decentralized version, each non-sentry sensor decides locally on its power state transitions. This approach uses the PowerManagementAlgorithm interface to control the power state transitions. The algorithm, based on the current length of idle time, decides whether the sensor turns off or stays on, and when the next control epoch is. Before the sensor turns off, it sends an off_notification message to notify the sentry that it is going to wake up and check for messages at the next control epoch; the sentry can then use a remote_turn_on message to wake up the sensor if it needs to at that time. (A sketch of this control loop follows the type definitions below.)
module PowerManagement {
    // see also the Sentry and NonSentry modules below
    provides {
        interface TurnOff;
        interface TurnOn;
    }
    uses {
        interface PowerManagementAlgorithm;
    }
}
module Sentry {
    provides {
        interface TurnOff;
        interface TurnOn;
        interface OffNotified;
    }
    uses {
        command remoteTurnOff;
        command remoteTurnOn;
    }
}
module NonSentry {
    provides {
        interface LocalTurnOff;
        interface LocalTurnOn;
    }
    uses {
        interface OffNotifying;
        interface PowerManagementAlgorithm;
    }
}
interface TurnOff {
    command result_t remoteTurnOff( remote_turn_off* msg );
}

interface TurnOn {
    command result_t remoteTurnOn( remote_turn_on* msg );
}

interface OffNotified {
    event result_t offNotification( off_notification* msg );
}

interface LocalTurnOff {
    event result_t localTurnOff(
        boolean remote_or_local,
        remote_turn_off* msg,
        power_state_control* action
    );
}

interface LocalTurnOn {
    event result_t localTurnOn(
        boolean remote_or_local,
        remote_turn_on* msg,
        power_state_control* action
    );
}

interface OffNotifying {
    command result_t offNotifying( off_notification* msg );
}

interface PowerManagementAlgorithm {
    command result_t powerManagementAlgorithm(
        unsigned int idle_time_length,
        power_state_control* action
    );
}
typedef struct {
unsigned int moteID;
unsigned int waking_up_period;
} remote_turn_off;
typedef struct {
unsigned int moteID;
unsigned int current_time;
} remote_turn_on;
typedef struct {
unsigned int moteID;
unsigned int expected_wake_up_time;
} off_notification;
typedef struct {
boolean on_or_off;
unsigned int next_control_epoch;
} power_state_control;
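A hypothetical sketch of the decentralized control loop on a non-sentry node, using the interfaces and types above (idle-time tracking and the actual power-down call are elided):

// Hypothetical sketch of the decentralized decision on a non-sentry node.
task void controlEpoch() {
    power_state_control action;
    off_notification note;
    unsigned int idle_time = 0;  // length of the current idle period, tracked elsewhere

    call PowerManagementAlgorithm.powerManagementAlgorithm( idle_time, &action );
    if( !action.on_or_off ) {                       // algorithm says: turn off
        note.moteID = TOS_LOCAL_ADDRESS;
        note.expected_wake_up_time = action.next_control_epoch;
        call OffNotifying.offNotifying( &note );    // tell the sentry first
        // ... power down until next_control_epoch, then check for
        // remote_turn_on messages from the sentry ...
    }
}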
We consider routing as passing a message of arbitrary size (with some upper limit defined by the length type) from a source to one or a set of destinations anywhere in the network. Routing components have two functions: one is to segment data into packets at the source and reassemble them at the destinations; the other is to choose one or a set of next hops for passing the packets toward their destinations.
There are various ways to specify message destinations and preferred routes. In this architecture, we present some commonly used ones, though the set is subject to extension. We intend to define only common routing interfaces, rather than common routing algorithms and component implementations. Applications should be able to switch between different routing components that provide the same set of interfaces. In addition to the message-sending interfaces, there is a routing interface that signals the message-arrival event, which can be wired to upper-layer components. Each routing component is required to implement this capability.
The following diagram shows how the routing components are composed in the context of TinyOS. Routing components, e.g., ROUTE-1 or ROUTE-2, sit on top of the AM_STANDARD component. AM_STANDARD demultiplexes incoming requests to the appropriate routing component. Each routing component may be used by one or more high-level application components. In the diagram, ROUTE-1 is used by COM-11 and COM-12, and ROUTE-2 by COM-21 and COM-22.
All communication goes through AM_STANDARD, which defines the top of the shared network stack. The routing components implement the same generic interface. That is, the interface between COM-11 and ROUTE-1 and the interface between COM-21 and ROUTE-2 are the same regardless of the routing algorithm. This makes it easier to "wire together" components in TinyOS.
By sharing the same interface, we can easily add new components to the communication stack whenever necessary. For example, we can add a SECURITY_COM component, which can be used by more than one routing component, as follows. Here, ROUTE-1 and ROUTE-2 use SECURITY_COM whereas ROUTE-3 does not.
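In NesC, this composition might be wired as in the following sketch. Com11, Com12, and Route1 stand in for COM-11, COM-12, and ROUTE-1; the configuration name and the AM_ROUTE1 active-message type constant are assumptions of ours:

// Illustrative composition sketch of the routing stack.
configuration RoutingStackC { }
implementation {
    components Com11, Com12, Route1, AM_STANDARD;

    // application components use ROUTE-1's routing interfaces
    Com11.SendMsgByID -> Route1.SendMsgByID;
    Com12.ReceiveRoutingMsg -> Route1.ReceiveRoutingMsg;

    // ROUTE-1 sits on top of the shared AM_STANDARD stack
    // (AM_ROUTE1 is an assumed active-message type constant)
    Route1.SendMsgGenericComm -> AM_STANDARD.SendMsg[AM_ROUTE1];
    Route1.ReceiveMsgGenericComm -> AM_STANDARD.ReceiveMsg[AM_ROUTE1];
}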
interface ReceiveRoutingMsg {
    /**
    Signal arrival of a message <p>
    @param length The length of the message
    @param msg The message just received
    */
    event result_t receive(
        uint8_t length,
        char* msg
    );
}
This interface provides a message-arrival signal, indicating that a message has arrived at one of its destinations. In addition to the message itself, the length of the message is provided. All routing modules have to provide this interface. In addition, routing modules may also provide some of the following interfaces.
interface GetRoutingMsgSource {
/**
Get the source of the last message received <p>
*/
command uint16_t get();
}
interface GetRoutingMsgHopCount {
/**
Get the hop count of the last message received <p>
*/
command uint8_t get();
}
interface GetRoutingMsgTimeStamp {
/**
Get the time stamp of last message received <p>
*/
command uint8_t get();
}
interface SendRoutingMsg {
    /**
    Send a message out to one or more remote motes <p>
    @param dest The destination( s )
    @param length The length of the message
    @param msg The array of bytes to send out
    @return SUCCESS if successful
    */
    command result_t send(
        destination_t dest,
        uint8_t length,
        char* msg
    );
    /**
    Send message done signal <p>
    @param msg The message just sent out
    @param success Whether the send completed successfully
    */
    event result_t sendDone(
        char* msg,
        result_t success
    );
}
This is a general interface for all routing components. We have identified a number of specific routing techniques that can be implemented using this interface. Here the destination is a union of all the types used for these techniques. This is a parameterized interface, where the type of destination is associated with send when the actual wiring takes place. The union type is currently defined as:
typedef union {
    uint8_t hops;
    uint16_t address;
    location_t *location;
    CBR_t *dest;
} destination_t;
where CBR_t is
typedef struct {
    routing_t type;
    constraints_t *dest;
    constraints_t *rout;
    objectives_t *objs;
} CBR_t;
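As an illustration, an application wired to SendRoutingMsg with a fixed type (using the routing_service_t enumeration defined later in this section) fills in only the matching union member. The component names here are hypothetical:

// In a configuration: the destination type is fixed at wiring time.
//   AppM.SendRoutingMsg -> SomeRouteM.SendRoutingMsg[tSEND_BY_ID];

// In the application module: fill in the matching union member.
void sendToMote42( uint8_t length, char* msg ) {
    destination_t dest;
    dest.address = 42;  // interpreted as a mote ID because of the wire above
    call SendRoutingMsg.send( dest, length, msg );
}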
In the following sections, we specify separate interfaces that may be used to wrap the generic interface for specific routing techniques.
interface SendMsgByBct {
    /**
    Send a message out <p>
    @param hops The number of hops over which to broadcast
    @param length The length of the message
    @param msg The array of bytes to send out
    @return SUCCESS if successful
    */
    command result_t send(
        uint8_t hops,
        uint8_t length,
        char* msg
    );
    /**
    Send message done signal <p>
    @param msg The message just sent out
    @param success Whether the send completed successfully
    */
    event result_t sendDone(
        char* msg,
        result_t success
    );
}
This interface specifies the local broadcast function of routing, i.e., it sends the given message to all nodes within the given number of hops.
interface SendMsgByID {
    /**
    Send a message out <p>
    @param address The ID of a remote mote
    @param length The length of the message
    @param msg The array of bytes to send out
    @return SUCCESS if successful
    */
    command result_t send(
        uint16_t address,
        uint8_t length,
        char* msg
    );
    /**
    Send message done signal <p>
    @param msg The message just sent out
    @param success Whether the send completed successfully
    */
    event result_t sendDone(
        char* msg,
        result_t success
    );
}
This is the simplest interface, where a node ID/address is specified as the destination.
interface SendMsgByGeo {
    /**
    Send a message out to location( s ) <p>
    @param location Pointer to a location structure representing a remote
           mote or motes
    @param length The length of the message
    @param msg The array of bytes to send out
    @return SUCCESS if successful
    */
    command result_t send(
        location_t *location,
        uint8_t length,
        char* msg
    );
    /**
    Send message done signal <p>
    @param msg The message just sent out
    @param success Whether the send completed successfully
    */
    event result_t sendDone(
        char* msg,
        result_t success
    );
}
This interface is for sending an array of bytes to one or all nodes at a geographical location. Locations are described by:
typedef struct {
routing_t type; //one or all
int16_t xCenter;
int16_t yCenter;
uint16_t range;
} location_t;
The same interface can be used for position-based as well as direction-based routing. For direction-based routing, x and y are interpreted as directions.
interface SendMsgByCBR {
    /**
    Send a message out to remote motes satisfying a set of constraints,
    choosing a route that satisfies the route constraints and is optimal
    with respect to the objectives <p>
    @param type one or all
    @param dest destination constraints
    @param rout route constraints
    @param objs objectives
    @param length message length
    @param msg array of bytes to send out
    @return SUCCESS if successful
    */
    command result_t send(
        routing_t type,
        constraints_t *dest,
        constraints_t *rout,
        objectives_t *objs,
        uint8_t length,
        char* msg
    );
    /**
    Send message done signal <p>
    @param msg The message just sent out
    @param success Whether the send completed successfully
    */
    event result_t sendDone(
        char* msg,
        result_t success
    );
}
In this interface, destinations are specified by constraints, each of which is a range of attribute values. Attributes can be constant (ID, energy cost, etc.) or variable (sensor readings, battery level, time, etc.).
The attribute interface in TinyOS-1.x will be used. In addition to destinations, this interface can also specify route constraints and objectives, where objectives are represented as minimizing or maximizing some attribute values over the overall routing path. The types of constraints and objectives are defined as follows.
typedef struct {
uint8_t id; //attribute id
int16_t lower;
int16_t upper;
} constraint_t;
typedef struct {
uint8_t id; //attribute id
uint8_t type; //maximize or minimize
} objective_t;
typedef struct {
    constraint_t *cons;
    uint8_t num;
} constraints_t;

typedef struct {
    objective_t *objs;
    uint8_t num;
} objectives_t;
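A hypothetical example of constructing a CBR destination and sending. The attribute IDs and the ALL and MINIMIZE constants are assumptions for illustration, not defined in this document:

// Hypothetical CBR send: deliver to all nodes whose attribute 5 (say, a
// sensor reading) lies in [100, 200], along a route that minimizes
// attribute 2 (say, energy cost). ALL and MINIMIZE are assumed constants.
void sendByConstraint( uint8_t length, char* msg ) {
    constraint_t c = { 5, 100, 200 };
    constraints_t dest = { &c, 1 };
    objective_t o = { 2, MINIMIZE };
    objectives_t objs = { &o, 1 };

    call SendMsgByCBR.send( ALL, &dest, NULL, &objs, length, msg );
}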
interface LocalLookup {
    /**
    Look up locally stored attributes of a remote mote by name <p>
    @param address The ID of the remote mote
    @param name The name of the attribute
    @param result The result of the lookup
    @return SUCCESS if the attribute exists, otherwise FAIL
    */
    command result_t lookupByName(
        uint16_t address,
        char *name,
        char *result
    );
    /**
    Look up locally stored attributes of a remote mote by attribute ID <p>
    @param address The ID of the remote mote
    @param id The ID of the attribute
    @param result The result of the lookup
    @return SUCCESS if the attribute exists, otherwise FAIL
    */
    command result_t lookupByID(
        uint16_t address,
        uint8_t id,
        char *result
    );
}
Some routing components may be able to provide local information about remote nodes, such as the estimated number of hops to them. This interface provides a generic way of accessing such information, assuming it is stored in the form of attributes (provided by the attribute interface in TinyOS-1.x).
Given the above set of interfaces, the groups participating in routing will provide component implementations. One can implement a single module that provides all the interfaces, or a set of modules, each of which provides a subset of the interfaces. Applications should be able to switch from one set of implementations to another. Beyond the compatibility of the common interface, implementations from different groups are not required to be compatible in order to be compiled and linked into one application. Here we give one example of the skeleton of an implementation. Note that SendRoutingMsg is a parameterized interface, with the type specified as:
typedef enum {
tSEND_BY_BROADCAST = 0,
tSEND_BY_ID = 1,
tSEND_BY_LOCATION = 2,
tSEND_BY_DIRECTION = 3,
tSEND_BY_CBR = 4
} routing_service_t;
module ParcRoutingM {
    provides {
        interface SendMsgByID;
        interface SendMsgByGeo as SendMsgByLoc;
        interface SendMsgByGeo as SendMsgByDir;
        interface SendMsgByBct;
        interface SendMsgByCBR;
        interface SendRoutingMsg[uint8_t type];
        interface ReceiveRoutingMsg;
        interface GetRoutingMsgSource;
        interface LocalLookup;
        interface StdControl;
    }
    uses {
        interface Leds;
        interface SendMsg as SendMsgGenericComm;
        interface ReceiveMsg as ReceiveMsgGenericComm;
        interface StdControl as StdControlGenericComm;
    }
}
Three types of functions will be supported in the TTF_Coordinator component.
We assume that clocks are well synchronized among motes.
module Coordinator {
    provides {
        interface TTF_Coordinator;
    }
    uses {
    }
}
interface TTF_Coordinator {
    /* register a TT function with the coordinator */
    command INDEX RegisterTTF( TTFRequest * );
    /* invoke a service function */
    command void InvokeSvF( SvFRequest * );
    /* update the execution schedule based on existing registered TTFs */
    command void UpdateExecSchedule();
    /* dispatch the next ready function in the execution engine */
    command FuncInfo* DispatchNextFunc();
    /* retrieve a timing error report */
    command TimingErrorRept * RetrieveNextErrorRept();
}
The TTF_Coordinator component contains data structures that maintain timing requirements for TT functions, the execution schedule, and possible error records, and it provides several APIs for manipulating those data structures. Application designers will provide the bodies of time-triggered (TT) functions in application components and will register them with TTF_Coordinator along with the relevant timing requirements and interrupt enabling/disabling options. TTF_Coordinator is responsible for maintaining the execution schedule, reflecting the timing requirements of the registered TTFs and the SvF requests generated. The main() function will dispatch TTFs and SvFs according to the execution schedule and execute them with interrupts enabled or disabled as specified. Possible error conditions, such as deadline violations, can be detected and reported either by TTF_Coordinator or by main(), and error records will be stored in TTF_Coordinator. Application components may retrieve the error records later and take appropriate actions.
/* function pointer type for all TTFs and SvFs */
typedef void ( *FuncPtr )( void* );
/* an index to a registered TT function, used for future unregistration */
typedef int INDEX;
typedef int MicroSec;
/* indicating the type of execution errors, such as deadline violation */
typedef int ERR_CODE;
typedef struct {
    /* the time when a TT function becomes ready to be scheduled */
    MicroSec LoopStartTime;
    /* the time when a TT function will be deleted from the TT scheduler */
    MicroSec LoopEndTime;
    /* LoopStartTime+( i-1 )*IterationInterval is the time the i-th iteration
       of a TT function will be scheduled/executed */
    MicroSec IterationInterval;
    /* the earliest time at which a TT function may start in each cycle */
    MicroSec EST;
    /* the latest time at which a TT function may start */
    MicroSec LST;
    /* LoopStartTime+( i-1 )*IterationInterval+GCT is the time by which the
       i-th iteration of a TT function should finish its execution */
    MicroSec GCT;
} AAC;

typedef struct {
    /* function pointer to the TT function */
    FuncPtr ptr_to_ttf;
    /* input parameters for the TT function */
    void * parameter;
    /* timing requirements */
    AAC aac;
    /* whether to enable/disable certain interrupts during a TTF execution */
    int interrupt_vector;
} TTFRequest;

typedef struct {
    /* function pointer to the SvF */
    FuncPtr ptr_to_svf;
    /* input parameters for the SvF */
    void * parameter;
    /* timing requirements */
    MicroSec MED;
    /* whether to enable/disable certain interrupts during a SvF execution */
    int interrupt_vector;
} SvFRequest;

typedef struct {
    /* the type of the error detected, e.g., deadline violation */
    ERR_CODE err_code;
    /* the reporter of the error, e.g., main() or a timer interrupt handler */
    int reporter_id;
    /* the time of error detection */
    MicroSec timestamp;
} TimingErrorRept;

typedef struct {
    /* whether it is a TTF or a SvF */
    int func_type;
    /* function pointer to a TTF or a SvF */
    FuncPtr ptr_to_func;
    /* pointer to the input parameter of a TTF or a SvF */
    void * parameter;
    /* start time window of a TTF */
    MicroSec EST, LST;
    /* for a TTF, it represents GCT; for a SvF it represents MED */
    MicroSec GCT_or_MED;
    /* interrupt enabling/disabling options */
    int interrupt_vector;
} FuncInfo;
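A hypothetical registration of a periodic TT function using the types above; the function itself and the numeric timing values are illustrative:

/* Hypothetical example: register a sensing function to run every 10 ms
   for one second; all times are in microseconds (MicroSec). */
void senseTTF( void *arg ) {
    /* read sensor, post results */
}

void registerSensing() {
    TTFRequest req;
    req.ptr_to_ttf = senseTTF;
    req.parameter = NULL;
    req.aac.LoopStartTime = 0;
    req.aac.LoopEndTime = 1000000;      /* deregistered after one second */
    req.aac.IterationInterval = 10000;  /* one iteration every 10 ms */
    req.aac.EST = 0;                    /* may start immediately... */
    req.aac.LST = 2000;                 /* ...but no later than 2 ms late */
    req.aac.GCT = 5000;                 /* must finish within 5 ms of release */
    req.interrupt_vector = 0;           /* no special interrupt masking */
    call TTF_Coordinator.RegisterTTF( &req );
}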
Prepared 17 October 2002 by Ted Herman, University of Iowa (OSU group).
The pursuer-evader scenarios for the NEST Challenge, including near-term experiments, the midterm demo, and later demonstrations, require that motes have a synchronized time-base. Time synchronization is needed for mote location determination (localization), position and velocity estimation of evaders, and real-time calculations for pursuer strategies. These several needs for time synchronization have slightly different requirements for accuracy and tolerate differing interfaces for how the time bases of distinct motes can be compared. Accordingly, we suggest an API with various time services.
Accurate localization may require time synchronization to within about 10 microseconds; evader velocity estimation could be calculated to sufficient accuracy with time synchronized to within about 300 microseconds. For localization, differential calculations between pairs of motes separated by a few meters may be sufficient; for the planning of multiple pursuers, a real-time calculation involving distant motes (tens of meters) may be needed.
Time synchronization requires communication; localization requires time synchronization; the specification of communication services proposed by UVA (Routing, Estimation, and Group Management APIs) depends on localization. These dependencies need not be circular, since one of the communication services (SendLocal) is little more than a TOS local broadcast. Nevertheless, we prefer to use some "communication services" outside of the current set of proposed interfaces; these are documented below (they could well be implemented within the TimeSync component, but we have a hunch that they may be useful elsewhere).
Note: UVA looked at these communication services and is agreeable to considering some support for them in the routing component; we'll wait until specifications are finalized and we also have more precise ideas about implementation to nail down these communication service specifications.
module PrototypeTimeSyncM {
    provides {
        interface Time;
    }
    uses {
        interface bdNeighbors;
        interface receiveMsg as ReceiveStampedMsg;
        interface sendLocal as sendNeighbors;
        interface sendByID;
    }
}
Though perhaps not the best choice, we bundle the commands into one interface (we could change this later).
interface Time {
    command result_t getLocalTime( time_t* time );
    command result_t getGlobalTime( time_t* time );
    command result_t xlatTime( time_t* ourtime, mote_id m, time_t* othertime );
}
getLocalTime is used to obtain the high-accuracy form of time, which has a local time-base. Think of it intuitively as every mote having its own "time zone". Such time is consistent for local real-time calculations. We imagine that this command fails if the clocks have not yet been synchronized (e.g., if getLocalTime is invoked too early in the initialization phase of system startup).
xlatTime is used to convert local time to the time-base of some other mote in the near vicinity (as mentioned above, this has to do with the two motes having a beacon in common). There are two failure modes for this command: it fails if invoked before TimeSync initialization is complete, and it fails if attempting a conversion outside of a common beacon vicinity (these should be distinct failure indications). Should applications need high-accuracy conversion between arbitrary motes -- not residing in a common beacon area -- we can also imagine adding another command for such a conversion. However, this would be a more expensive call, and it would be asynchronous (only later delivering the result via an event).
getGlobalTime obtains the lower accuracy form of time, which is common (plus or minus an error tolerance) to all motes in the system. Again, this command fails if invoked too early in system startup.
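A hypothetical fragment using these commands (inside a module implementation wired to Time; neighborID stands for some nearby mote sharing a beacon with us):

// Hypothetical usage of the Time interface.
void timestampDetection( mote_id neighborID ) {
    time_t now, theirs;
    if( call Time.getLocalTime( &now ) == SUCCESS ) {
        // translate our local time into the neighbor's "time zone"
        if( call Time.xlatTime( &now, neighborID, &theirs ) == SUCCESS ) {
            // 'theirs' is the same instant in neighborID's time-base
        }
        // failures distinguish "not yet synchronized" from
        // "no beacon in common with neighborID"
    }
}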
Robert Szewczyk suggests that we use 48-bit counters (at the granularity of 32 kHz, or 31.25 microseconds) for planning beyond the challenge to other applications.
typedef uint48_t time_t;
We don't yet have a spec for the message format(s), frame definitions and other things internal.