NEST Challenge Architecture

Document version: $Revision: 1.3 $ $Date: 2002/12/06 23:24:08 $

1. Introduction
1.1. Chapter Descriptions
Introduction

Chapter Descriptions. Roles of Berkeley, Middleware Groups, and Document.

Demo Description

Detailed description of the demo scenario, including what entities are present and how they behave. Also define high-level architecture decisions such as global/local coordinate systems, global/local time, etc.

Architecture Methodologies

Informal, linguistic description of the philosophies used to construct each component

Estimation

Grouping

Localization

Power Management

Routing

Service Coordination

Time Synchronization

1.2. Group and Document Roles

Berkeley's role

Middleware groups' role

Challenge Architecture Document goals

2. Demo Description
2.1. Game Outline

To start the game, the motes comprising the sensor network are deployed onto the playing field in a sleep state. An external node broadcasts a begin signal to the sensor network to indicate the start of global time. The pursuers and evaders then enter the playing field and remain within the field for the duration of the game. The sensor network provides a variety of services to both pursuers and other sensor motes: time synchronization, localization, critter (moving object: pursuer or evader) estimation, etc. For the purpose of the game, the sole goal of these services is to produce estimates of the position, velocity, and identity of critters in the playing field. This information is time-stamped and routed to all pursuers in the playing field. The pursuers have onboard computation facilities comparable to a laptop computer and may optionally communicate through a separate robust channel to coordinate the capture of the evaders. When all evaders are captured (a capture occurs when a pursuer is "close enough" to an evader), the game ends. A base station outside the playing area provides logging and visualization services.

2.2. Demo Implementation
2.3. Functional Outline

Pursuers

  1. Initialize
  2. Listen for updates from the sensor network
  3. Communicate, coordinate with the other pursuers if necessary
  4. Actuate to capture the evader
  5. Debugging/logging output
  6. Go to Step 2 if evader has not been caught
  7. Done
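The pursuer loop above can be sketched as a simple control loop. This is a hedged illustration only: the types, the step-toward-target movement, and the capture test are assumptions standing in for the real network updates and pursuit algorithm.

```c
/* Hypothetical types standing in for the real network structures. */
typedef struct { int x, y; } point_t;

/* Move one step toward the target; returns nonzero when "close enough". */
static int pursue_step(point_t *me, point_t target, int capture_radius) {
    if (me->x < target.x) me->x++;
    else if (me->x > target.x) me->x--;
    if (me->y < target.y) me->y++;
    else if (me->y > target.y) me->y--;
    int dx = me->x - target.x, dy = me->y - target.y;
    return dx * dx + dy * dy <= capture_radius * capture_radius;
}

/* Steps 2-6 of the pursuer loop: consume updates, actuate, repeat. */
static int run_pursuer(point_t start, point_t evader, int max_steps) {
    point_t me = start;
    for (int step = 0; step < max_steps; step++) {
        /* Step 2: in the real system, block here for a network update. */
        /* Step 4: actuate toward the latest estimate. */
        if (pursue_step(&me, evader, 1))
            return 1;               /* Step 7: done, evader captured */
        /* Step 5: logging would go here. */
    }
    return 0;                       /* gave up without a capture */
}
```

In the demo, Step 2 would be driven by TargetPosition events from the sensor network rather than a fixed evader position.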

Mote Sensors

  1. Initialize and calibrate
    1. Position and velocity estimates require the sensor motes to have self-localization
    2. ... and time synchronization
    3. Robust routing protocols may require initial network measurements
  2. Estimate the position and velocity of pursuers and evaders
    1. Filtering and sensor data reduction at each mote
  3. Send estimates to pursuers
  4. Debugging/logging output
  5. Go to Step 2 until "Stop"
  6. Done

Evaders

  1. Human controlled
  2. Smart mote evaders get some subset of the following capabilities:
    1. Listen to the network traffic
    2. Know the pursuit algorithms
    3. Know the pursuer dynamics
  3. Smart evaders maximally exploit any data they gather

3. Architecture Methodologies

The architecture defines a set of components which may implement algorithms and may behave as services. The architecture further defines the input and output structures and protocols accepted and emitted by components, implicit or explicit constraints and behaviors pertinent to the components, and interrelationships between components.

In this chapter, we first define just the subset of the architecture seen from the application layer: the pursuit-evasion game demo. From there, we iteratively extend that architecture with likely supporting components, up to and including top-level TinyOS components. As this document evolves, we will keep an eye toward abstraction and generality, ideally creating a refactored specification with broader application than just the game demo.

3.1. Overall Methodology
3.1.1. Prototypes

Prototypes are the essence of the architecture. Prototypes define the minimal interface provided by components. The goal is to create an architecture in which dissimilar implementations of components are interchangeable if they provide equivalent facilities, while at the same time not imposing unnecessary constraints on the underlying algorithms.

These Prototypes formally describe the API that certain classes of components and algorithms must adhere to. Concrete implementations of these prototypes provide at least the described interfaces, but may include additional interfaces specific to the algorithm at hand, such as Sensor and Actuator interfaces. Concrete implementations that wish to be used in the demo must fully specify themselves in the context of this document. That is, they must clearly define their abstract, formal, NesC, and graphical architectures. These concrete specifications will be wholly included in the architecture document.

3.1.2. Services

Each service is implemented as a separate component. We intend to provide a coordination component that schedules the other components and manages shared resources. Each component is initialized in turn, during which it is responsible for registering itself with the coordination component. Each component registers how often it should be executed (time-triggered) and which events it should receive (messages, sensor readings, etc.). The coordination component is responsible for meeting these demands.
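The registration and scheduling scheme described above might be sketched as follows. The names (svc_register, coordinator_tick) and the fixed-size table are illustrative assumptions, not part of the architecture.

```c
#define MAX_SERVICES 8

typedef void (*svc_run_t)(void);

typedef struct {
    const char *name;
    unsigned    period;    /* run every `period` ticks (time-triggered) */
    unsigned    countdown;
    svc_run_t   run;
} service_t;

static service_t services[MAX_SERVICES];
static int n_services = 0;

/* Called by each component during its init to register its schedule. */
static int svc_register(const char *name, unsigned period, svc_run_t run) {
    if (n_services >= MAX_SERVICES || period == 0) return -1;
    services[n_services++] = (service_t){ name, period, period, run };
    return 0;
}

/* One scheduler tick: runs every service whose period has elapsed.
 * Returns how many services ran this tick. */
static int coordinator_tick(void) {
    int ran = 0;
    for (int i = 0; i < n_services; i++) {
        if (--services[i].countdown == 0) {
            services[i].countdown = services[i].period;
            if (services[i].run) services[i].run();
            ran++;
        }
    }
    return ran;
}
```

Event dispatch (messages, sensor readings) would follow the same pattern, with components registering handlers instead of periods.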

3.1.3. Filtering and Calibration Interfaces

To filter data or calibrate a sensor or actuator, we intend to create components that both provide and use the interface they are filtering/calibrating. This allows us to chain any number of filters or calibrations transparently.
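The chaining idea can be illustrated with function pointers: each stage both uses a reading source and provides one with the same shape, so stages stack transparently. The stage names, the offset correction, and the crude low-pass filter are all illustrative assumptions.

```c
typedef int (*read_fn_t)(void);

/* The raw "sensor" at the bottom of the chain. */
static int raw_value = 100;
static int raw_read(void) { return raw_value; }

/* A calibration stage: provides read(), uses the stage below it. */
static read_fn_t calib_below;
static int calib_read(void) { return calib_below() + 5; } /* offset fix */

/* A smoothing stage with the same shape, so stages are interchangeable. */
static read_fn_t smooth_below;
static int smooth_last;
static int smooth_read(void) {
    int v = smooth_below();
    smooth_last = (smooth_last + v) / 2;   /* crude low-pass filter */
    return smooth_last;
}

/* Wire: application -> smoothing -> calibration -> raw sensor. */
static read_fn_t wire_chain(void) {
    calib_below = raw_read;
    smooth_below = calib_read;
    smooth_last = 105;       /* seeded with one calibrated sample */
    return smooth_read;
}
```

In nesC the same effect is achieved by wiring a component that both provides and uses, say, the Sensor interface between the application and the underlying sensor component.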

3.2. Implementation Methodology
3.2.1. Resources

By default, we are not providing a resource sharing infrastructure beyond the sharing of the CPU and RF channel via the service coordination component. That is, we are assuming that in any particular configuration, no more than one component will want to use, say, the sounder. Creating a configuration in which more than one component needs access to the same resource is considered malformed. If this becomes a problem in practice, we will work to develop a resource sharing scheme. We are deferring that solution until we see conflicts arise in practice; that way, we can develop something well-suited to the problem (instead of something ill-suited).

3.2.2. Input/Output Convention

Sensor readings (input) are event-driven. Processing that depends on sensor readings, say filtering data, is also event-driven. This cascades all the way up: events are fired both for estimating position and for initiating the broadcast of those estimates. Actuation (output) is command-driven; that includes both movement and outgoing communication.

3.2.3. Send/Receive Structures

We want to abstract from byte-packed messages used for radio communication. Each component that communicates via messages to other components (either on the local mote or remote motes) operates in the context of a structure containing native types. We package all relevant information in a single structure. This reduces the need to redefine interfaces when/if we adjust only the particular data passed between components. This also results in a one-to-one correspondence between message interfaces and message structures.
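The convention might look like the following sketch: components exchange a structure of native types, and byte packing happens only at the radio boundary. The field names are illustrative; the real demo structures differ.

```c
#include <stdint.h>
#include <string.h>

typedef struct {
    uint16_t mote_id;
    int16_t  x, y;          /* native types, not packed bytes */
    int16_t  vx, vy;
    uint32_t timestamp;
} target_estimate_msg_t;

/* Marshal to the byte-packed form only at the radio boundary. */
static int pack(const target_estimate_msg_t *in, uint8_t *buf, int buflen) {
    if (buflen < (int)sizeof *in) return -1;
    memcpy(buf, in, sizeof *in);    /* same-architecture shortcut */
    return (int)sizeof *in;
}

static int unpack(const uint8_t *buf, int len, target_estimate_msg_t *out) {
    if (len < (int)sizeof *out) return -1;
    memcpy(out, buf, sizeof *out);
    return 0;
}
```

Because interfaces are declared in terms of the structure, adding a field changes only the structure definition, not every interface that carries it.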

4. Estimation

This component aggregates sensor readings among a group of motes. The protocol for data measurement and aggregation is application-specific and transparent to the rest of the demo code. If a target is detected, the component fires a TargetPosition event on the motes attached to the pursuers. The event passes the address of a target and a target identifier. The protocol will attempt to use the same identifier consistently to refer to the same target. This is accomplished with the help of the group management component described in Section 5. A higher-level protocol can be used to compile a list of all identified targets and their current locations.

Estimate Target depends on the PacketRouting component. In particular, it needs the RouteMobile interface to deliver the target information to the pursuers. The target estimation component also interacts with the location service: it needs the location information to calculate the position of the target from the positions of the detecting motes.

module PrototypeEstimateTargetM {
    provides {
        interface TargetPosition;
    } uses {
        interface ReceiveMsg;
        interface SendMobile;
    }
}
interface TargetPosition {
    event result_t TargetPosition(
        location_t position,
        char target_id 
    );
}
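One common way to compute the target position from the detecting motes' positions is a centroid weighted by signal strength. This is only a hedged sketch of that idea; the actual aggregation protocol is application-specific, and the types here are illustrative.

```c
#include <stdint.h>

typedef struct { int32_t x, y; uint16_t strength; } detection_t;
typedef struct { int32_t x, y; } position_t;

/* Weighted centroid of the detecting motes' positions. */
static int estimate_position(const detection_t *d, int n, position_t *out) {
    int64_t sx = 0, sy = 0, sw = 0;
    for (int i = 0; i < n; i++) {
        sx += (int64_t)d[i].x * d[i].strength;
        sy += (int64_t)d[i].y * d[i].strength;
        sw += d[i].strength;
    }
    if (sw == 0) return -1;         /* no usable detections */
    out->x = (int32_t)(sx / sw);
    out->y = (int32_t)(sy / sw);
    return 0;
}
```

A real implementation would also fold in velocity tracking and the target identifier supplied by the group management component.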
5. Grouping

Below is a preliminary API for group management services in NEST (MIT, OSU, UVa).

The overall module definition is:

module GroupManagementM {
    provides {
        interface StdControl;
        interface GroupManagementGlobal as GMGlobal;
        interface GroupManagementNeighbor as GMNeighbor;
        interface GroupManagementTracking as GMTracking;
    } uses {
        /* uses list to be specified */
    }
}

The group management component produces a suite of services with three different functionalities, described below.

5.1. The Global Broadcast/Multicast service

This service exports these calls:

interface GroupManagementGlobal {
    command result_t multicast(
        uint8_t type,
        char *msg 
    );
    event result_t receiveMulticast(
        uint8_t type,
        char *msg 
    );
    event result_t leader(
        uint8_t type,
        uint8_t on_off 
    );
}

The multicast primitive communicates a message efficiently to all destinations within a given radius, either configured into the service or indicated in the message header. The type parameter distinguishes the different types of multicast services described in this document; this service has type LOCATION. The implementation transparently uses MIT's location-dependent group formation protocol. receiveMulticast is an event raised to inform an application that a multicast message has been received. The underlying routing scheme uses a leader election protocol. The nodes that are elected leaders are notified via the leader event when they become, and when they cease to be, leaders in this protocol. The application can ignore that event or use it for application-level functions that need to be performed at selected nodes in the network. See MIT's group formation documentation for more information on how leaders are elected and what properties they have.

5.2. Neighborhood Maintenance Component
interface GroupManagementNeighbor {
    command result_t getNeighborhoodInfo();
}

The main call exported is getNeighborhoodInfo(), which returns a data structure with information regarding neighborhood health.

5.3. Entity Tracking Service

The interface is:

interface GroupManagementTracking {
    command result_t join(
        uint8_t target_signature 
    );
    command result_t leave(
        uint8_t target_signature 
    );
    command result_t setState(
        char *state 
    );
    command result_t getState(
        char *state 
    );
    event result_t leader(
        uint8_t type,
        uint8_t on_off 
    );
}

The main abstraction exported by the service is that of tracking groups. A tracking group is formed among all nodes sensing the target, as defined by a given sensory signature. The unique group name unambiguously labels each target. As the target moves, the membership of the group changes, but group identity remains the same. Hence, proximity-based groups will help identify and track different evaders. The main API is:

command result_t join( uint8_t target_signature )

The call specifies the detected target signature. The call is executed when a node senses a target of that particular signature. The call returns a group id specifying which target of that signature is currently in the proximity of the joining node, as maintained by the group management service. Hence, a node's code might look something like:

if( target_signature is detected ){
    target_id = join( target_signature )
    tell pursuer that I see target_id at my_location.
} 

Observe that in the absence of tracking groups the node would not be able to immediately identify which target it is seeing (e.g., whether it is seeing the evader or one of the pursuers). Identifying the target locally is the main advantage of tracking groups. Other API calls are:

command result_t leave( target_signature )

The leave call specifies that the target can no longer be locally sensed by this node. The service also supports the calls:

event result_t leader( type, on_off )
command result_t setState( state )
command result_t getState( state )

As before, the leader event notifies the application when its node becomes or ceases to be leader, except that when type=TRACKING, the event refers to the leader of the tracking group. This leader changes as the group migrates. The invariants maintained are the group id and the fact that the leader is always within sensory horizon of the target tracked by this specific group. setState and getState are used to save and restore state that the algorithm maintains persistently across different leaders. Hence, when a node becomes leader it can call getState and resume the computation from where the last leader left it. The node would periodically checkpoint the computation using setState.
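The checkpoint-and-resume pattern might be sketched as follows. The fixed-size state blob, the names, and the "count of sightings" computation are assumptions for illustration; in the real service the state lives with the migrating group, not in a single mote's memory.

```c
#include <string.h>

#define STATE_SIZE 16

static char group_state[STATE_SIZE];   /* held by the group service */

static int setState(const char *state) {       /* checkpoint */
    memcpy(group_state, state, STATE_SIZE);
    return 0;
}

static int getState(char *state) {             /* resume on a new leader */
    memcpy(state, group_state, STATE_SIZE);
    return 0;
}

/* A leader counting target sightings, checkpointing after each one.
 * Any node that becomes leader picks up where the last leader left off. */
static int leader_step(void) {
    char buf[STATE_SIZE];
    int count;
    getState(buf);
    memcpy(&count, buf, sizeof count);
    count++;                           /* one more sighting */
    memcpy(buf, &count, sizeof count);
    setState(buf);
    return count;
}
```

Each leader_step stands for one unit of leader work; because the count is checkpointed through setState, a leader change between steps does not lose progress.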

6. Localization API document

We break localization into four sub-systems:

  1. sensing/actuation
  2. data management
  3. computation
  4. system control

This breakdown gives us modularity and interchangeability because each sub-system has its own API.

First, we describe each sub-system and a few important points about them. Then, we write down the APIs for each sub-system. Finally, we make a few concluding remarks about protocols, incremental development, non-homogenous networks, and data representation.

6.1. The Subsystems
6.1.1. Localization Sensing/Actuating

The sensing/actuation sub-system gives you ranging and/or angle data (with which you would later do multi-lateration and/or triangulation, respectively). This sub-system should be broken into at least two components: sensor and actuator. This allows non-homogenous networks, e.g. an infrastructure might always transmit localization beacons while the network always senses or vice versa. We can have a homogenous network by simply installing both components on every mote.

This subsystem has three top interfaces with which it

  1. is requested to actuate (e.g. send a localization beacon)
  2. gives new data (e.g. ranging estimates)
  3. is commanded to turn on and off

It also has a lower interface with which it interacts with the underlying actuator or sensor.

6.1.2. Localization Data Management

The data sub-system holds ranging/angle/location data of all important neighbors. This subsystem is not just a passive data structure; it is actually quite active. Let's say, for instance, that my localization algorithm works best with 8 neighbors. If I have more than 8 neighbors, I need to know which neighbors to ignore (perhaps those with the noisiest ranging estimates or perhaps those with short distances). I also have to know when data becomes old and invalid, etc. Every implementation of this sub-system will have to make all of these decisions based on the type of ranging being used and the type of localization algorithm being used.
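The active policy described above might be sketched as a small neighbor table that keeps only the least-noisy entries and ages out stale ones. The names, the eviction rule (noisiest out first), and the swap-remove expiry are all illustrative assumptions.

```c
#include <stdint.h>

#define MAX_NEIGHBORS 8

typedef struct {
    uint16_t mote_id;
    uint16_t distance;
    uint16_t stdv;        /* noisier ranging -> larger stdv */
    uint32_t last_heard;
} neighbor_t;

static neighbor_t table[MAX_NEIGHBORS];
static int n_neighbors = 0;

static int add_neighbor(neighbor_t nb) {
    if (n_neighbors < MAX_NEIGHBORS) {
        table[n_neighbors++] = nb;
        return 0;
    }
    /* Table full: evict the noisiest entry if the newcomer is better. */
    int worst = 0;
    for (int i = 1; i < n_neighbors; i++)
        if (table[i].stdv > table[worst].stdv) worst = i;
    if (nb.stdv >= table[worst].stdv) return -1;   /* newcomer no better */
    table[worst] = nb;
    return 0;
}

/* Drop entries not heard from within max_age ticks. */
static void expire_neighbors(uint32_t now, uint32_t max_age) {
    for (int i = 0; i < n_neighbors; )
        if (now - table[i].last_heard > max_age)
            table[i] = table[--n_neighbors];       /* swap-remove stale */
        else
            i++;
}
```

A different ranging technology or localization algorithm would plug in a different scoring rule behind the same LocalizationData interface.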

This sub-system has a bottom interface with which it receives new data. It also has a top interface with which it gives data and a top interface with which it can be commanded to start and stop (starting and stopping here might not be well defined).

6.1.3. Localization Computation

The computational level is where we do triangulation, multilateration, or whatever. This is what most people think of when they hear "localization", but it is really the easiest part to write for an embedded system like TinyOS.

This sub-system has a bottom interface with which it requests data from the data management sub-system. It also has a top interface with which it gives new location estimates and a top interface with which it can be commanded to start or stop.

6.1.4. Localization System Control

This sub-system controls all the other sub-systems. This should be separate from the other systems because its functionality is completely defined by the application. For example, in a static environment we may only want to localize once in the beginning of the application and then never again. If something walks into the room we might want all nodes near the moving node to help it localize. In a completely dynamic environment we might want all nodes localizing by following some scheduling algorithm, which would be implemented here. Sometimes, we may want very frequent ranging estimates but only infrequent location estimates, etc. etc.

This sub-system has a bottom interface with which it controls all lower sub-systems. It also has a top interface with which it is told to start and stop.

6.2. API Definitions
6.2.1. Localization Sensing/Actuating
module PrototypeLocalizationActuatorM {
    provides {
        interface StdControl;
        interface LocalizationActuator;
    } uses {
        interface Actuator;
    }
}
module PrototypeLocalizationSensorM {
    provides {
        interface StdControl;
        interface LocalizationSensor;
    } uses {
        interface Sensor;
    }
}
6.2.2. Localization Data Management
module PrototypeLocalizationDataManagerM {
    provides {
        interface StdControl;
        interface LocalizationData;
    } uses {
        interface LocalizationSensor;
    }
}
6.2.3. Localization Computation
module PrototypeLocalizationComputationM {
    provides {
        interface StdControl;
        interface LocalizationCompute;
        interface LocalizationSensor;
    } uses {
        interface LocalizationData;
    }
}
6.2.4. Localization System Control
module PrototypeLocalizationControlM {
    provides {
        interface StdControl;
    } uses {
        interface StdControl as LocalizationSensorControl;
        interface StdControl as LocalizationActuatorControl;
        interface StdControl as LocalizationDataControl;
        interface StdControl as LocalizationComputationControl;
        interface LocalizationActuator;
        interface LocalizationCompute;
    }
}
6.3. Interfaces
interface LocalizationActuator {
    command result_t Actuate(
        uint16_t actuationDestinationAddress,
        uint16_t dataDestinationAddress 
    );
}
interface LocalizationSensor {
    event result_t DataSensed(
        localization_t newData 
    );
}
interface LocalizationData {
    command result_t GetLocalizationInfo(
        uint16_t moteID 
    );
}
interface LocalizationCompute {
    command result_t Localize();
}
6.4. Data Types
typedef struct {
    uint16_t moteID;
    ranging_t* ranging;
    angle_t* angle;
    location_t* location;
} localization_t;
typedef struct {
    uint16_t EstimatedXCoord; //or theta angle for spherical coords
    uint16_t XCoordStdv;
    uint16_t EstimatedYCoord; //or phi angle
    uint16_t YCoordStdv;
    uint16_t EstimatedZCoord; //or r value
    uint16_t ZCoordStdv;
    uint16_t CoordinateSystemID;
} location_t;
typedef struct {
    uint16_t DistanceFromMe;
    uint16_t DistanceStdv;
} ranging_t;
typedef struct {
    uint16_t phiAngleRelativeToMe;
    uint16_t phiAngleStdv;
    uint16_t thetaAngleRelativeToMe;
    uint16_t thetaAngleStdv;
} angle_t;
6.5. Important Notes and Problems
6.5.1. Protocols

Note that these interfaces are really simple and don't support any protocols. However, you can always wrap any sub-system in a component that gives you a more sophisticated interface to support your protocol. For example, you may want an interface that allows you to request N chirps at frequency F. This can be done with a wrapper component around your LocalizationActuator component. You might also have motes that want to ask other motes for their locations. You can do this by wrapping your LocalizationData component in a wrapper component that interprets packet commands. By not including these things in the interfaces above, we are separating the functionality from the protocol, thereby allowing us to interchange protocols.

6.5.2. Incremental Development and Non-homogeneity

Given the above about protocols, we do not have to assume that each mote has all four sub-systems (i.e., a homogeneity assumption). For example, system control might be contained in a single "leader" mote, or the entire computation sub-system might be implemented centrally on a PC. In my particular case, for example, all I have is a sensor/actuator system that sends time-of-flight chirps and makes ranging estimates. I could wrap it in a wrapper component that chirps when it receives my command packet and sends me back the data in a data packet. Then, data management, computation, and chirp scheduling are done centrally in Matlab. This is good for incremental development.

6.5.3. Data Representation Problems

Notice that we have a huge problem with data representation. If the above localization_t data structure is not sufficient for all or most localization applications, there is little hope of interchanging components. I note three main problems here: units, coordinate systems, and error terms.

6.5.3.1. Units

Do we store every distance estimate with its units (i.e., cm or meters or hop-counts), or do we just use the convention that all distances are in centimeters? What about systems with relative distances that don't know the units of their ranging estimates? In the above localization_t data structure I assumed we would use the convention that all distance estimates are in centimeters and all rho/theta estimates are in degrees.

6.5.3.2. Coordinate Systems

Are all of our positions stored with their coordinate systems, e.g., if a position is in GPS coordinates, should it say so? What about relative coordinate systems? Do we need a LocalizationCoordinateSystem component to bootstrap a coordinate system? How do we identify the units of a relative coordinate system? How do we identify the identity of a relative coordinate system, i.e., when two networks that have different relative coordinate systems meet, how do we resolve them? What about networks that have two overlaid coordinate systems, i.e., some GPS nodes and some nodes on a relative or room-based coordinate system? In the above localization_t data structure I assumed that we could identify each coordinate system with the ID of the leader or creator of that coordinate system. However, I have not defined a coordinateSystem component.

6.5.3.3. Error terms

Quite often your ranging, angle, or location estimates come with error terms. How do we represent this? With a canonical probability distribution? We could assume Gaussian noise on everything and always couple every estimate with a standard deviation. Is that sufficient for everybody? In the above localization_t data structure I assumed that it is.

7. Power management component proposed by CMU

We propose the following tentative module for power management in the NEST challenge application on the Berkeley OEP. The module provides interfaces for implementing both centralized and decentralized power-state control algorithms for wireless sensors.

The centralized version contains TurnOn and TurnOff interfaces for a sentry to control the power states of each individual sensor in its group. If a sentry needs to turn a particular sensor off, it broadcasts a remote_turn_off message with the sensor ID and the period at which the sensor wakes up and checks for remote_turn_on messages; if such a message is detected, the sensor stays on, otherwise it turns back off. The sentry knows when to send remote_turn_on or new remote_turn_off messages to a sensor because it knows when that sensor wakes up to check for messages.

In the decentralized version, each non-sentry sensor decides locally on its power state transitions. This approach uses the PowerManagementAlgorithm interface to control the power state transitions. The algorithm interface, based on the current length of idle time, decides whether the sensor turns off or stays on, and when the next control epoch occurs. Before the sensor turns off, it sends out an off_notification message to notify the sentry that it is going to wake up and check for messages at the next control epoch; the sentry can then use a remote_turn_on message to wake the sensor at that time if needed.
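A concrete decision rule for the PowerManagementAlgorithm interface might look like the following sketch: sleep once the sensor has been idle past a threshold, and lengthen the next control epoch as idleness grows. The threshold, backoff rule, and constants are all assumptions for illustration.

```c
#include <stdint.h>

typedef struct {
    int      on_or_off;           /* 1 = stay on, 0 = turn off */
    uint32_t next_control_epoch;  /* ticks until the next decision */
} power_state_control_t;

#define IDLE_THRESHOLD 100
#define MIN_EPOCH       10
#define MAX_EPOCH      160

static int power_management_algorithm(uint32_t idle_time_length,
                                      power_state_control_t *action) {
    if (idle_time_length < IDLE_THRESHOLD) {
        action->on_or_off = 1;               /* busy enough: stay on */
        action->next_control_epoch = MIN_EPOCH;
    } else {
        action->on_or_off = 0;               /* idle: turn off */
        /* Sleep longer the longer we have been idle, up to a cap. */
        uint32_t epoch = MIN_EPOCH * (idle_time_length / IDLE_THRESHOLD);
        action->next_control_epoch = epoch > MAX_EPOCH ? MAX_EPOCH : epoch;
    }
    return 0;
}
```

The cap bounds how long a sensor can be unreachable, which is what lets the sentry know when its remote_turn_on message will be heard.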

7.1. Modules
module PowerManagement {
    // module Sentry;
    // module NonSentry;
    provides {
        interface TurnOff;
        interface TurnOn;
    } uses {
        interface PowerManagementAlgorithm;
    }
}
module Sentry {
    provides {
        interface TurnOff;
        interface TurnOn;
        interface OffNotified;
    } uses {
        command result_t remoteTurnOff();
        command result_t remoteTurnOn();
    }
}
module NonSentry {
    provides {
        interface LocalTurnOff;
        interface LocalTurnOn;
    } uses {
        interface OffNotifying;
        interface PowerManagementAlgorithm;
    }
}
7.2. Interfaces
interface TurnOff {
    command result_t remoteTurnOff(
        remote_turn_off* msg 
    );
}
interface TurnOn {
    command result_t remoteTurnOn(
        remote_turn_on* msg 
    );
}
interface OffNotified {
    event result_t offNotification(
        off_notification* msg 
    );
}
interface LocalTurnOff {
    event result_t localTurnOff(
        boolean remote_or_local,
        remote_turn_off* msg,
        power_state_control* action 
    );
}
interface LocalTurnOn {
    event result_t localTurnOn(
        boolean remote_or_local,
        remote_turn_on* msg,
        power_state_control* action 
    );
}
interface OffNotifying {
    command result_t offNotifying(
        off_notification* msg 
    );
}
interface PowerManagementAlgorithm {
    command result_t powerManagementAlgorithm(
        unsigned int idle_time_length,
        power_state_control* action 
    );
}
7.3. Types
typedef struct {
    unsigned int moteID;
    unsigned int waking_up_period;
} remote_turn_off;
typedef struct {
    unsigned int moteID;
    unsigned int current_time;
} remote_turn_on;
typedef struct {
    unsigned int moteID;
    unsigned int expected_wake_up_time;
} off_notification;
typedef struct {
    boolean on_or_off;
    unsigned int next_control_epoch;
} power_state_control;
8. Interface for the Routing Component of TinyOS

We consider routing as passing a message of arbitrary size (with an upper limit defined by the length type) from a source to one or a set of destinations anywhere in the network. The routing components have two functions: one is to segment data into packets at the source and reassemble them at the destinations, and the other is to choose one or a set of next hops for passing the packets toward their destinations.
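The segmentation/reassembly function might be sketched as follows. The packet layout, payload size, and in-order assumption are illustrative; a real implementation would buffer out-of-order packets by sequence number.

```c
#include <stdint.h>
#include <string.h>

#define PAYLOAD 8

typedef struct {
    uint8_t seq;                 /* segment index */
    uint8_t len;                 /* bytes used in data[] */
    uint8_t data[PAYLOAD];
} packet_t;

/* Split a message into packets; returns the packet count or -1. */
static int segment(const uint8_t *msg, int msglen, packet_t *pkts, int max) {
    int n = 0;
    for (int off = 0; off < msglen; off += PAYLOAD) {
        if (n >= max) return -1;
        int chunk = msglen - off < PAYLOAD ? msglen - off : PAYLOAD;
        pkts[n].seq = (uint8_t)n;
        pkts[n].len = (uint8_t)chunk;
        memcpy(pkts[n].data, msg + off, chunk);
        n++;
    }
    return n;
}

/* Rebuild the message from in-order packets; returns its length. */
static int reassemble(const packet_t *pkts, int n, uint8_t *out) {
    int len = 0;
    for (int i = 0; i < n; i++) {
        memcpy(out + len, pkts[i].data, pkts[i].len);
        len += pkts[i].len;
    }
    return len;
}
```

Next-hop selection, the second function of the routing component, is independent of this layer and varies per routing algorithm.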

There are various ways to specify message destinations and preferred routes. In this architecture, we present some commonly used ones, though the set is open to extension. We intend to standardize only the routing interfaces, not the routing algorithms or component implementations. Applications should be able to switch between different routing components that provide the same set of interfaces. In addition to message sending interfaces, there is a routing interface that signals the message arrival event, which can be wired to upper-layer components. Each routing component will be required to implement this capability.

The following diagram shows how the routing components will be composed in the context of TinyOS. Routing components, e.g. ROUTE-1 or ROUTE-2, will be on top of AM_STANDARD component. AM_STANDARD demultiplexes incoming requests to the appropriate routing component. Each routing component will be used by one or more high-level application components. In the following diagram, ROUTE-1 is used by COM-11 and COM-12, and ROUTE-2 by COM-21 and COM-22.

All communication goes through AM_STANDARD, which defines the top of the shared network stack. The routing components implement the same generic interface. That is, the interface between COM-11 and ROUTE-1 and the interface between COM-21 and ROUTE-2 are the same regardless of the routing algorithm. This makes it easier to "wire together" components in TinyOS.

By sharing the same interface, we can easily add new components to the communication stack whenever necessary. For example, we can add the SECURITY_COM component, which can be used by more than one routing component, as follows. Here, ROUTE-1 and ROUTE-2 use SECURITY_COM whereas ROUTE-3 does not.

8.1. Proposed Interfaces
8.1.1. Receiving Routing Message Interface
interface ReceiveRoutingMsg {
    /**
    Message arrival signal <p>
    @param length The length of the message
    @param msg The message just received
    */
    event result_t receive(
        uint8_t length,
        char* msg 
    );
}

This interface provides a message arrival signal, indicating that a message has arrived at one of its destinations. In addition to the message itself, the length of the message is provided. All routing modules have to provide the above interface. In addition, routing modules may also provide some of the following interfaces.

interface GetRoutingMsgSource {
    /**
    Get the source of the last message received <p>
    */
    command uint16_t get();
}
interface GetRoutingMsgHopCount {
    /**
    Get the hop count of the last message received <p>
    */
    command uint8_t get();
}
interface GetRoutingMsgTimeStamp {
    /**
    Get the time stamp of the last message received <p>
    */
    command uint8_t get();
}
8.1.2. Generic Routing Sending Message Interface
interface SendRoutingMsg {
    /**
    Send a message out to one or more remote motes <p>
    @param dest The destination( s )
    @param length The length of the message
    @param msg The array of bytes need to send out
    @return SUCCESS if successful
    */
    command result_t send(
        destination_t dest,
        uint8_t length,
        char* msg 
    );
    /**
    Send message done signal <p>
    @param msg The message just sent out
    @param success If the send is done successfully
    */
    event result_t sendDone(
        char* msg,
        result_t success 
    );
}

This is a general interface for all routing components. We have identified a number of specific routing techniques that can be implemented using this interface. Here the destination is a union of all the types used for these techniques. This is a parameterized interface, where the type of destination is associated with send when the actual wiring takes place. The union type is currently defined as:

typedef union {
    uint8_t hops;
    uint16_t address;
    location_t *location;
    CBR_t *dest;
} destination_t;

where CBR_t is

typedef struct {
    routing_t type;
    constraints_t *dest;
    constraints_t *rout;
    objectives_t *objs;
} CBR_t;
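The specific interfaces of Sections 8.1.3 through 8.1.6 can then be thin wrappers that fill the appropriate arm of the union before calling the generic send. This sketch shows the idea with a stub generic send; the tag enum is an assumption (the document's union carries no tag), and the location/CBR arms are elided for brevity.

```c
#include <stdint.h>

typedef enum { DEST_HOPS, DEST_ADDRESS, DEST_LOCATION, DEST_CBR } dest_kind_t;

typedef union {
    uint8_t  hops;       /* local broadcast radius */
    uint16_t address;    /* unicast by mote ID */
} destination_t;

/* Stub generic send that just records what it was asked to do. */
static dest_kind_t   last_kind;
static destination_t last_dest;

static int generic_send(dest_kind_t kind, destination_t dest,
                        uint8_t length, const char *msg) {
    (void)length; (void)msg;   /* a real routing component sends here */
    last_kind = kind;
    last_dest = dest;
    return 0;
}

/* SendMsgByID.send as a thin wrapper over the generic interface. */
static int send_by_id(uint16_t address, uint8_t length, const char *msg) {
    destination_t d;
    d.address = address;
    return generic_send(DEST_ADDRESS, d, length, msg);
}

/* SendMsgByBct.send likewise selects the hops arm of the union. */
static int send_by_bct(uint8_t hops, uint8_t length, const char *msg) {
    destination_t d;
    d.hops = hops;
    return generic_send(DEST_HOPS, d, length, msg);
}
```

In nesC the "tag" would instead come from which parameterized interface instance is wired, so the union itself stays tagless as defined above.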

In the following sections, we specify separate interfaces that may be used to wrap the generic interface for specific routing techniques.

8.1.3. Local Broadcast Interface
interface SendMsgByBct {
    /**
    Send a message out <p>
    @param hops The number of hops over which to broadcast
    @param length The length of the message
    @param msg The array of bytes need to send out
    @return SUCCESS if successful
    */
    command result_t send(
        uint8_t hops,
        uint8_t length,
        char* msg 
    );
    /**
    Send message done signal <p>
    @param msg The message just sent out
    @param success If the send is done successfully
    */
    event result_t sendDone(
        char* msg,
        result_t success 
    );
}

This interface specifies a local broadcast function of routing, i.e., it sends the given message to all nodes within the given number of hops.

8.1.4. Send Message By ID Interface
interface SendMsgByID {
    /**
    Send a message out <p>
    @param address The ID of a remote mote
    @param length The length of the message
    @param msg The array of bytes need to send out
    @return SUCCESS if successful
    */
    command result_t send(
        uint16_t address,
        uint8_t length,
        char* msg 
    );
    /**
    Send message done signal <p>
    @param msg The message just sent out
    @param success If the send is done successfully
    */
    event result_t sendDone(
        char* msg,
        result_t success 
    );
}

This is the simplest interface, in which the destination is specified by node ID/address.

8.1.5. Send Message By Geographical Location Interface
interface SendMsgByGeo {
    /**
    Send a message out to location(s) <p>
    @param location The pointer to a location structure representing
    a remote mote or motes
    @param length The length of the message
    @param msg The array of bytes to send out
    @return SUCCESS if successful
    */
    command result_t send(
        location_t *location,
        uint8_t length,
        char* msg 
    );
    /**
    Send message done signal <p>
    @param msg The message just sent out
    @param success If the send is done successfully
    */
    event result_t sendDone(
        char* msg,
        result_t success 
    );
}

This interface is for sending an array of bytes to one or all nodes defined by geographical locations. Locations are described by

typedef struct {
    routing_t type; //one or all
    int16_t xCenter;
    int16_t yCenter;
    uint16_t range;
} location_t;

The same interface can be used for position-based as well as direction-based locations. For direction-based routing, x and y are interpreted as directions.
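A sketch of the membership test this implies, in C: a node is a destination when it lies within range of the center. The function name in_location is ours, not part of the interface:

```c
#include <stdint.h>

typedef uint8_t routing_t;

typedef struct {
    routing_t type;   /* one or all */
    int16_t xCenter;
    int16_t yCenter;
    uint16_t range;
} location_t;

/* A node at (x, y) is a destination if its distance from the
   location's center is at most `range`; comparing squared
   distances avoids sqrt and floating point on the motes. */
int in_location(const location_t *loc, int16_t x, int16_t y) {
    int64_t dx = (int64_t)x - loc->xCenter;
    int64_t dy = (int64_t)y - loc->yCenter;
    return dx * dx + dy * dy <= (int64_t)loc->range * loc->range;
}
```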

8.1.6. Send Message By Constraints Interface
interface SendMsgByCBR {
    /**
    Send a message out to remote motes satisfying a set of destination constraints,
    while choosing a route that is optimal and satisfies the route constraints <p>
    @param type one or all
    @param dest destination constraints
    @param rout route constraints
    @param objs objectives
    @param length message length
    @param msg array of bytes to send out
    @return SUCCESS if successful
    */
    command result_t send(
        routing_t type,
        constraints_t *dest,
        constraints_t *rout,
        objectives_t *objs,
        uint8_t length,
        char* msg 
    );
    /**
    Send message done signal <p>
    @param msg The message just sent out
    @param success If the send is done successfully
    */
    event result_t sendDone(
        char* msg,
        result_t success 
    );
}

In this interface, destinations are specified by constraints, each of which is a range of attribute values. Attributes can be constant (ID, energy cost, etc.) or variable (sensor readings, battery level, time, etc.).

The attribute interface in TinyOS-1.x will be used. In addition to destinations, this interface can also specify route constraints and objectives, where objectives are represented as minimizing or maximizing some attribute values over the overall routing path. The types of constraints and objectives are defined as follows.

typedef struct {
    uint8_t id; //attribute id
    int16_t lower;
    int16_t upper;
} constraint_t;
typedef struct {
    uint8_t id; //attribute id
    uint8_t type; //maximize or minimize
} objective_t;
typedef struct {
    constraint_t *cons;
    uint8_t num;
} constraints_t;
typedef struct {
    objective_t *objs;
    uint8_t num;
} objectives_t;
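For illustration, a node might test whether it satisfies a constraints_t as in the C sketch below. The flat attrs array indexed by attribute id is a hypothetical stand-in for the TinyOS-1.x attribute interface:

```c
#include <stdint.h>

typedef struct { uint8_t id; int16_t lower; int16_t upper; } constraint_t;
typedef struct { constraint_t *cons; uint8_t num; } constraints_t;

/* A node matches the destination when every constraint's attribute
   value falls within [lower, upper].  `attrs[id]` is a hypothetical
   table holding the node's current value for attribute `id`. */
int satisfies(const constraints_t *cs, const int16_t *attrs) {
    for (uint8_t i = 0; i < cs->num; i++) {
        const constraint_t *c = &cs->cons[i];
        if (attrs[c->id] < c->lower || attrs[c->id] > c->upper)
            return 0;
    }
    return 1;
}
```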
8.1.7. Local Lookup Interface
interface LocalLookup {
    /**
    lookup remote attributes by name<p>
    @param address The ID of the remote mote
    @param name The name of the attribute
    @param result The result of the lookup
    @return SUCCESS if the attribute exists, otherwise FAIL
    */
    command result_t lookupByName(
        uint16_t address,
        char *name,
        char *result 
    );
    /**
    lookup remote attributes by attribute ID<p>
    @param address The ID of the remote mote
    @param id The id of the attribute
    @param result The result of the lookup
    @return SUCCESS if the attribute exists, otherwise FAIL
    */
    command result_t lookupByID(
        uint16_t address,
        uint8_t id,
        char *result 
    );
}

Some routing components may be able to provide local information about remote nodes, such as the estimated number of hops. This interface provides a generic way of accessing such information, assuming it is stored in the form of attributes (provided by the attribute interface in TinyOS-1.x).
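A name-based lookup over a local attribute store might look like the following C sketch; the parallel name/value arrays and the sample attributes are hypothetical, not part of the attribute interface:

```c
#include <stdint.h>
#include <string.h>

typedef int result_t;
#define SUCCESS 1
#define FAIL 0

/* Hypothetical local attribute store: parallel arrays of names and values. */
#define NUM_ATTRS 2
static const char *attr_names[NUM_ATTRS]  = { "hops", "energy" };
static const char *attr_values[NUM_ATTRS] = { "3", "87" };

/* Look up an attribute by name, copying its value into `result`;
   returns SUCCESS if the attribute exists, otherwise FAIL. */
result_t lookup_by_name(const char *name, char *result) {
    for (int i = 0; i < NUM_ATTRS; i++)
        if (strcmp(attr_names[i], name) == 0) {
            strcpy(result, attr_values[i]);
            return SUCCESS;
        }
    return FAIL;
}
```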

8.2. Component Implementation and Usage of the Above Interface

Given the above set of interfaces, various groups participating in routing will provide component implementations. One can implement a single module that provides all the interfaces, or a set of modules, each of which provides a subset. Applications should be able to switch from one set of implementations to another. Beyond conforming to the common interfaces, implementations from different groups are not required to be compatible in order to be compiled and linked into one application. Here we give one example of an implementation skeleton. Note that SendRoutingMsg is a parameterized interface with the type specified as:

typedef enum {
    tSEND_BY_BROADCAST = 0,
    tSEND_BY_ID = 1,
    tSEND_BY_LOCATION = 2,
    tSEND_BY_DIRECTION = 3,
    tSEND_BY_CBR = 4
} routing_service_t;
module ParcRoutingM {
    provides {
        interface SendMsgByID;
        interface SendMsgByGeo as SendMsgByLoc;
        interface SendMsgByGeo as SendMsgByDir;
        interface SendMsgByBct;
        interface SendMsgByCBR;
        interface SendRoutingMsg[uint8_t type];
        interface ReceiveRoutingMsg;
        interface GetRoutingMsgSource;
        interface LocalLookup;
        interface StdControl;
    } uses {
        interface Leds;
        interface SendMsg as SendMsgGenericComm;
        interface ReceiveMsg as ReceiveMsgGenericComm;
        interface StdControl as StdControlGenericComm;
    }
}
9. Proposal for Time-Triggered Function (TTF) Support in Coordinator Component in TinyOS (U.C. Irvine)
9.1. Requirement

Three types of functions will be supported in the TTF_Coordinator component.

  1. Time-triggered function (TTF): A TTF can be described as, "From GlobalTime = T1 to T2, do a task TTF every P time-units (iteration-interval) by the GCT of D" where GCT denotes guaranteed completion time.
  2. Service function (SvF): A SvF should be started at the earliest convenient time (when a TTF initiated earlier is not in execution) and completed within the maximum execution duration (MED) of D after the invocation (i.e., a one-way, non-blocking request for execution of the SvF) occurs. The notion of a SvF is similar to that of a task in the current TinyOS if the MED for a task can be specified and the task can be invoked from other components.
  3. Conventional utility function to which a blocking call can be made from within a TTF or a SvF.
9.2. Assumption

Clocks are well synchronized among motes.

9.3. Interface
module Coordinator {
    provides {
        interface TTF_Coordinator;
    } uses {
    }
}
interface TTF_Coordinator {
    /* to register a TT function with the coordinator */
    command INDEX RegisterTTF(
        TTFRequest * 
    );
    /* to invoke a service function */
    command void InvokeSvF(
        SvFRequest * 
    );
    /* to update the execution schedule based on existing registered TTFs */
    command void UpdateExecSchedule();
    /* to dispatch next ready function in the execution engine */
    command FuncType* DispatchNextFunc();
    /* to retrieve a timing error report */
    command TimingErrorRept * RetrieveNextErrorRept();
}

The TTF_Coordinator component contains data structures that maintain timing requirements for TT functions, the execution schedule, and possible error records, and it provides several APIs for manipulating those data structures. Application designers will provide the bodies of time-triggered (TT) functions in application components and will register them with TTF_Coordinator along with the relevant timing requirements and interrupt enabling/disabling options. TTF_Coordinator is responsible for maintaining the execution schedule to reflect the timing requirements of registered TTFs and the SvF requests generated. The main() function will dispatch TTFs and SvFs according to the execution schedule and execute them with interrupts enabled or disabled as specified. Possible error conditions, such as deadline violations, can be detected and reported either by TTF_Coordinator or by main(), and error records will be stored in TTF_Coordinator. Application components may retrieve the error records later and take appropriate actions.

9.4. Appendix: Basic Data and Function Types
/* function pointer type for all TTFs and SvFs */
typedef void( *FuncPtr )( void* );
/* an index to a registered TT function, used for future unregistration */
typedef int INDEX;
typedef int MicroSec;
/* indicating the type of execution errors, such as deadline violation */
typedef int ERR_CODE;
typedef struct {
    /*the time when a TT function will be ready to be scheduled */
    MicroSec LoopStartTime;
    /* the time when a TT function will be deleted from the TT scheduler */
    MicroSec LoopEndTime;
    /* LoopStartTime + (i-1)*IterationInterval is the time the i-th iteration of the TT function will be scheduled/executed */
    MicroSec IterationInterval;
    /*The earliest time at which a TT function may start in each cycle */
    MicroSec EST;
    /*The latest time at which a TT function may start */
    MicroSec LST;
    /* LoopStartTime + (i-1)*IterationInterval + GCT is the time by which the i-th iteration of the TT function should finish its execution */
    MicroSec GCT;
} AAC;
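The schedule arithmetic the AAC comments describe can be sketched in C as follows; MicroSec is taken as a 32-bit integer here, and the helper names are ours, not part of the proposal:

```c
#include <stdint.h>

typedef int32_t MicroSec;

typedef struct {
    MicroSec LoopStartTime;
    MicroSec LoopEndTime;
    MicroSec IterationInterval;
    MicroSec EST;
    MicroSec LST;
    MicroSec GCT;
} AAC;

/* Scheduled time of the i-th iteration (i >= 1), per the AAC comments:
   LoopStartTime + (i-1)*IterationInterval. */
MicroSec iteration_start(const AAC *a, int i) {
    return a->LoopStartTime + (MicroSec)(i - 1) * a->IterationInterval;
}

/* Guaranteed completion deadline of the i-th iteration:
   its scheduled time plus GCT. */
MicroSec iteration_deadline(const AAC *a, int i) {
    return iteration_start(a, i) + a->GCT;
}
```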
typedef struct {
    /* function pointer to TT function */
    FuncPtr ptr_to_ttf;
    /* input parameters for TT function */
    void * parameter;
    /* timing requirements */
    AAC aac;
    /* indicating whether enabling/disabling certain interrupts during a TTF execution */
    int interrupt_vector;
} TTFRequest;
typedef struct {
    /* function pointer to SvF */
    FuncPtr ptr_to_svf;
    /* input parameters for SvF */
    void * parameter;
    /* timing requirements */
    MicroSec MED;
    /* indicating whether enabling/disabling certain interrupts during a SvF execution */
    int interrupt_vector;
} SvFRequest;
typedef struct {
    /* indicating the type of the error detected, e.g., deadline violation */
    ERR_CODE err_code;
    /* indicating the reporter of the error, e.g., main() or a timer interrupt handler */
    int reporter_id;
    /* indicating the time of error detection */
    MicroSec timestamp;
} TimingErrorRept;
typedef struct {
    /* indicating whether it is a TTF or a SvF */
    int func_type;
    /* function pointer to a TTF or a SvF */
    FuncPtr ptr_to_func;
    /* pointer to the input parameter of a TTF or a SvF */
    void * parameter;
    /* start time window of a TTF */
    MicroSec EST, LST;
    /* for a TTF, it represents GCT; for a SvF it represents MED */
    MicroSec GCT_or_MED;
    /* interrupt enabling/disabling options */
    int interrupt_vector;
} FuncInfo;
10. TimeSync

Prepared 17 October 2002 by Ted Herman, University of Iowa (OSU group).

The pursuer-evader scenarios for the NEST Challenge, including near-term experiments, the midterm demo, and later demonstrations, require that motes have a synchronized time-base. Time synchronization is needed for mote location determination (localization), position and velocity estimation of evaders, and real-time calculations for pursuer strategies. These several needs for time synchronization have slightly different requirements for accuracy and tolerate differing interfaces for how the time bases of distinct motes can be compared. Accordingly, we suggest an API with various time services.

Accurate localization may require time synchronization within about 10 microseconds; evader velocity estimation could be calculated to sufficient accuracy with time synchronized to within about 300 microseconds. For localization, it may be that differential calculations between pairs of motes separated by a few meters is sufficient; for the planning of multiple pursuers, a real-time calculation involving distant motes (tens of meters) may be needed.

Time synchronization requires communication; localization requires time synchronization; the specification of communication services proposed by UVA (Routing, Estimation, and Group Management APIs) depends on localization. These dependencies need not be circular, since one of the communication services (SendLocal) is little more than a TOS local broadcast. Nevertheless, we prefer to use some "communication services" outside of the current set of proposed interfaces; these are documented below (they could well be implemented within the TimeSync component, but we have a hunch that they may be useful elsewhere).

Note: UVA looked at these communication services and is agreeable to considering some support for them in the routing component; we'll wait until specifications are finalized and we also have more precise ideas about implementation to nail down these communication service specifications.

10.1. NesC Prototype for TimeSync
module PrototypeTimeSyncM {
    provides {
        interface Time;
    } uses {
        interface bdNeighbors;
        interface receiveMsg as ReceiveStampedMsg;
        interface sendLocal as sendNeighbors;
        interface sendByID;
    }
}
10.1.1. Interface: Time

Though perhaps not the best choice, we bundle all the commands into one interface (we could change this later).

interface Time {
    command result_t getLocalTime(
        time_t* time 
    );
    command result_t getGlobalTime(
        time_t* time 
    );
    command result_t xlatTime(
        time_t* ourtime,
        mote_id m,
        time_t* othertime 
    );
}

getLocalTime is used to obtain the high-accuracy form of time, which has a local time-base. Think of it intuitively as every mote having its own "time zone". Such time is consistent for local real-time calculations. We imagine that this command fails if the clocks have not yet been synchronized (e.g., the getLocalTime command is invoked too early in the initialization phase of system startup).

xlatTime is used to convert local time to the time-base of some other mote in the near vicinity (as mentioned above, this has to do with having a beacon in common to the two motes). There are two failure modes for this command: it fails if invoked before TimeSync initialization is completed; and it fails if attempting a conversion outside of common beacon vicinity (these should be distinct failure indications). Should applications need high-accuracy conversion between arbitrary motes -- not residing in a common beacon area -- we can also imagine adding another command for such a conversion. However, this would be a more expensive call, and it would be asynchronous (only later delivering the result via an event).
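One way to picture the conversion: if each mote maintains an estimated offset between its own clock and the common beacon's clock, translating a timestamp is just a hop through the beacon's time-base. A hypothetical C sketch (the offset table and function name are our assumptions, not part of the interface):

```c
#include <stdint.h>

typedef int64_t time_us;

/* Hypothetical per-mote state: offset_to_beacon[i] is mote i's estimate of
   (beacon clock - mote i clock), learned from the shared beacon's broadcasts. */
time_us offset_to_beacon[4];

/* Translate a timestamp from mote `from`'s time-base to mote `to`'s
   by passing through the common beacon's time-base. */
time_us xlat_time(time_us t_from, int from, int to) {
    time_us beacon_time = t_from + offset_to_beacon[from];
    return beacon_time - offset_to_beacon[to];
}
```

This also makes the second failure mode concrete: if the two motes do not share a beacon, no common time-base exists through which to translate.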

getGlobalTime obtains the lower accuracy form of time, which is common (plus or minus an error tolerance) to all motes in the system. Again, this command fails if invoked too early in system startup.

Robert Szewczyk suggests that we should use 48-bit counters (at the granularity of 32 KHz, or 31.25 microsec) for planning beyond the challenge to other applications.

typedef uint48_t time_t;
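Standard C offers no uint48_t, so an implementation might keep the counter in the low 48 bits of a uint64_t and wrap modulo 2^48, as in this sketch; at 32 KHz granularity such a counter spans well over two centuries before wrapping:

```c
#include <stdint.h>

#define TIME48_MASK ((UINT64_C(1) << 48) - 1)

typedef uint64_t time48_t;   /* only the low 48 bits are significant */

/* Advance a 48-bit tick counter, wrapping modulo 2^48. */
time48_t time48_add(time48_t t, uint64_t ticks) {
    return (t + ticks) & TIME48_MASK;
}

/* Elapsed ticks between two readings; the masking makes the
   result correct across a single wraparound of the counter. */
uint64_t time48_elapsed(time48_t earlier, time48_t later) {
    return (later - earlier) & TIME48_MASK;
}
```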

We don't yet have a spec for the message format(s), frame definitions and other things internal.

10.1.2. Notes on Interfaces Used
  • The command bdNeighbors returns a pointer to a list of up to k mote identifiers, where k is the assumed upper bound on a "neighborhood" size (more about this shortly). Each item in the list is the identifier of a mote to which the invoker has a direct, bidirectional link (that is, if i invokes and j is in the resulting list, then i and j can both use sendLocal to transmit to each other).
  • The event receiveMsg, documented in the routing component description, carries a pointer to a message with "TimeSync" as its type. Presumably, the routing component also supplies a "register" command interface that the TimeSync component invokes as part of initialization (so that the routing component has knowledge of the TimeSync message type -- or was it intended to resolve this at compile/link time?).
  • The command sendLocal is documented in the routing component.
  • The command sendByID is needed so that motes in the local neighborhood of a beacon (a concept internal to the TimeSync component) can address each other, in a message send primitive, by identifier -- in view of the basic need for TimeSync before locations are determined, we need some such primitive. For the type of synchronization algorithm developed by UCLA, this sendByID only transmits messages that travel two hops: from a mote, to a beacon, and then to another mote within the beacon's neighborhood.