Testing Ptolemy II

This page is primarily for Ptolemy II Developers. Some of the commands mentioned below are not included in the Ptolemy II distribution.

Test Suite

We have included regression tests for most of the Ptolemy II code. Usually, wherever there is a Java file, the corresponding tests are in the adjacent test/ directory.

Running The Tests

There are three types of tests:
  1. Unit tests, mostly written in Tcl, that use Jacl, a 100% Java implementation of a subset of Tcl. These tests appear in the test/ directories as *.tcl files.
  2. System tests that are Ptolemy models. These tests appear in the test/auto/ directories as *.xml files.
  3. JUnit tests that can invoke the Tcl and auto tests above. These tests appear in the test/junit directories.
To run the tcl and model tests in one directory:
          cd $PTII/ptolemy/actor/lib/test
          make tests
        
To run the tcl and model tests using JUnit from the $PTII directory:
          cd $PTII
          ant test.single -Dtest.name=ptolemy.actor.lib.test.junit.JUnitTclTest -Djunit.formatter=plain
        
To get usage for the test.single rule, try ant test.single and look at the first few lines of output.
To run the tcl and model tests using JUnit from a test/ directory:
          cd $PTII/ptolemy/actor/lib/test
          $PTII/bin/ptjunit
        
To run all the tests, use ant.
Ant has various targets; run ant -p to list them and look for the test* targets.
ant test
Runs all the tests, including the long and longest tests (see below).
ant test.longest
Runs the longest tests, which include exporting all the demos to HTML and take about an hour. These tests are run by the nightly build.
ant test.short
Runs only the short tests. This is a good way to quickly test changes.
To run the tcl and model tests using JUnit from Eclipse:
Currently, this does not work because some of the tests must be run from the test/ directory, and ant under Eclipse runs from $PTII.

How To Use the Tests During Development

A good practice is to run the tests in the directory in which you are working, and then to run the tests from the top level if the change may affect other parts of the tree.

Unfortunately, the development tree is likely to have some pre-existing test failures, so establish a baseline by running the tests either before you make your changes or in a clean tree.

  1. In either a clean development tree or before making significant changes, run the tests at the top level and save the output:
              cd $PTII
              ant test.short >& before.txt
            
  2. List the failed tests and save the output:
              egrep "\] Failed: [1-9]" before.txt > beforeFailed.txt
            
  3. Review the failed tests. Hopefully, there should be very few failed tests.
  4. Make your changes.
  5. Run the tests in the directory in which you are working:
              cd test
              make
            
  6. When you feel your changes are ready to be checked in, run the tests at the top level and save the output in a different file:
              cd $PTII
              ant test.short >& after.txt
            
  7. List the failed tests and save the output:
              egrep "\] Failed: [1-9]" after.txt > afterFailed.txt
            
  8. Use diff to compare the before and after failed tests:
              diff beforeFailed.txt afterFailed.txt
            
  9. Fix test failures as necessary and repeat running the tests until there are no new test failures.

Writing Your Own Tests

There are two ways to write tests:

  1. Using Vergil to write tests using the Test actor
  2. Write tests using Tcl

Using Vergil to write tests

The testing infrastructure will automatically run any MoML models located in test/auto directories. (The names of these MoML files do not need to be listed anywhere in order for them to be run.)

However, the test infrastructure must be set up in each new directory containing tests; see mkpttest below.

MoML models used for testing should follow these conventions:

The test passes if it does not throw an exception.

The Test actor (located under "more libraries") can be used to compare the first few results of a simulation with known good results. If the comparison fails, then the test fails.

If a test is in the optional test/auto/knownFailedTests directory, then it will be marked as a known failure if it fails. (For more information, see Checking Known Failed Test Results below.)

Platform-specific tests may be put into directories such as auto/macosx-x86_64/ or auto/linux-amd64/. The name of the directory is the value of the os.name Java property with the spaces removed and converted to lower case, followed by a dash, followed by the value of the os.arch Java property.

The auto32/ directory contains 32-bit tests that should run on any 32-bit platform.

The auto/nonTerminatingTests/ directory contains tests that are not expected to terminate.

To create the infrastructure for a new test directory, use $PTII/adm/bin/mkpttest dirname. Once the infrastructure is in place, the tests in your new test/auto directory should run in the nightly build.
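
As a quick sanity check of which platform-specific auto/ directory applies to your machine, you can compute the directory name in a Jacl shell (for example, $PTII/bin/ptjacl). This is a sketch that mirrors the naming rule above; the exact string handling is illustrative:

      set osName [java::call java.lang.System getProperty "os.name"]
      set osArch [java::call java.lang.System getProperty "os.arch"]
      # Remove the spaces from os.name, convert to lower case,
      # then append a dash and os.arch, e.g. macosx-x86_64
      regsub -all { } $osName {} osName
      puts "[string tolower $osName]-$osArch"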

Using Vergil to write tests is quite a bit easier than writing Tcl code, but handling corner cases and testing for erroneous conditions is much more difficult with models. Tcl tests are unit tests, whereas tests that use models are system tests and may mask unit-level bugs.

Write tests using Tcl

The test suite infrastructure is based on the Tcl test suite code.

We ship Jacl, a 100% Java implementation of a subset of Tcl. The Ptolemy II test suite uses Jacl so that we have access to Java objects. Jacl may be found in $PTII/lib/ptjacl.jar.

make tests will run the tests in the current directory and any subdirectories.

The file $PTII/util/testsuite/testDefs.tcl defines the Tcl proc test.

test takes five arguments:

  1. The name of the test, for example: foo-1.0
    The name of the test should strictly follow this format: a name, a dash, and a major.minor number. The Tcl tests that come with the Tcl distribution follow a similar format, so unless there is a strong need to deviate, please stick with what works.
  2. The test description, usually a single sentence.
  3. The contents of the test, usually Tcl code that does the action to be tested. The last line of the contents should return a value.
  4. The results to be compared against.
  5. The last argument is optional and determines what sort of test is being run. The default value is NORMAL, which means that the test should pass under normal conditions. If the value is KNOWN_FAILED, then the test is expected to fail, but eventually will be fixed. By using KNOWN_FAILED, developers can mark tests that they know are failing, which will save other developers from attempting to debug known problems.
Below is a sample piece of code that sources the testDefs.tcl file and then runs one test. The expected results in this example are deliberately incorrect, so the test suite properly reports that the test failed.
        if {[string compare test [info procs test]] == 1} then {
            source [file join $PTII util testsuite testDefs.tcl]
        } {}

        test testExample-1.1 {This is the first test example, it does very little} {
            catch {this is an error} errMsg1
            set a "this is the value of a"
            list $errMsg1 $a
        } {{invalid command name "this"} {this is NOT the value of a}}
    
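For comparison, here is the same test with the expected results corrected, so that it passes:

        test testExample-1.2 {This is the first test example, with correct results} {
            catch {this is an error} errMsg1
            set a "this is the value of a"
            list $errMsg1 $a
        } {{invalid command name "this"} {this is the value of a}}
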
Parts of a Tcl test file

Tcl Test files should be located in the test directory.

It is better to have many small test files as opposed to a few large test files, so that other developers can quickly find the tests for the class they are working with. Usually, tests for the class Foo are found in the file test/Foo.tcl.

Each test file should have the following parts:

  1. The Copyright
  2. The code that loads the test system package
              if {[string compare test [info procs test]] == 1} then {
                  source testDefs.tcl
              }
            
    Each directory contains a testDefs.tcl file which in turn sources $PTII/util/testsuite/testDefs.tcl. The idea here is that if the test framework changes, each test file need not be updated.
  3. A line that the user can uncomment if they want the test system to produce verbose messages:
              #set VERBOSE 1
            
  4. The individual tests, which should loosely follow the Ptolemy II file format standard:
              ############################################################################
              #### Foo
              test Foo-1.1 {Test out Foo} {
    
              } {}
            
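
Putting these parts together, a minimal test/Foo.tcl might look like the following sketch (copyright text elided; NamedObj stands in for the class under test):

      # ... copyright text goes here ...

      if {[string compare test [info procs test]] == 1} then {
          source testDefs.tcl
      } {}

      #set VERBOSE 1

      ############################################################################
      #### Foo
      test Foo-1.1 {Create a NamedObj and get its name} {
          set n [java::new ptolemy.kernel.util.NamedObj "Foo"]
          $n getName
      } {Foo}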
Tcl Test Styles
There are two styles of tests:
  1. Tests that handle all necessary setup in each individual test.
  2. Tests that rely on the earlier tests to do setup.
In general, it should be possible to run each test file over and over again within the same process without exiting (the tests should be idempotent).

It is up to the author of the tests whether each individual test does all the necessary setup. If each test is atomic, it is easy to highlight the text of an individual test and run it by itself. If many tests share common setup, a separate setup procedure might help. On the negative side, atomic tests are usually longer and have more complicated return results.
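
As an illustration of the shared-setup style, the following sketch (again using NamedObj as a stand-in for the class under test) has a second test that depends on the first; an atomic version would create its own NamedObj inside each test body:

      # Shared setup: the object is created once, outside the tests.
      set n [java::new ptolemy.kernel.util.NamedObj]

      test Shared-1.1 {Set the name} {
          $n setName "first"
          $n getName
      } {first}

      # This test relies on Shared-1.1 having run first.
      test Shared-1.2 {Modify the name set by the previous test} {
          $n setName "[$n getName] second"
          $n getName
      } {first second}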


Testing Java

Jacl

Jacl is a 100% Java implementation of a subset of Tcl. We use Jacl to test Java by writing Tcl code that exercises the Java classes.

Running the tests

To run all the tests, do cd $PTII; make tests

If you run make in a test directory that contains tests written in Tcl for testing Java classes, then the 'right thing' should just happen.

If you are running in Eclipse:

  1. In Eclipse, go to Run -> Debug Configurations
  2. Select Java Application and then click the New icon.
  3. In the Main tab, set "Name:" to ptjacl and, in "Main class:", enter tcl.lang.Shell.
  4. Optional: In the Arguments tab, under "Program arguments", enter alljtests.tcl or any individual test tcl file (e.g. SimpleDelay.tcl). Or, leave the "Program arguments" field blank and, when ptjacl is running (see below), enter text into the Eclipse console.
  5. Optional: In the Arguments tab, under "VM arguments", enter -Dptolemy.ptII.dir=your PtII directory
    (E.g. -Dptolemy.ptII.dir=c:/hyzheng/ptII).
    In case your directory path contains spaces, you need to use quotes. (E.g. -Dptolemy.ptII.dir="c:/my workspace/ptII").
  6. In the "Working directory:" pane, select "Other:" and browse to the directory containing the tcl tests.
    (E.g. C:\hyzheng\ptII\ptolemy\domains\de\lib\test)
  7. Select Debug.
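
If you left the "Program arguments" field blank, you can type Tcl commands directly into the Eclipse console once ptjacl starts; for example, assuming the working directory from step 6 contains SimpleDelay.tcl:

      % source SimpleDelay.tcl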

The nice thing about using Eclipse is that you can easily locate where an exception is thrown by clicking the classes listed in the stack trace. You can also set a breakpoint for further diagnosis.

Writing Tests for Java

Below we discuss some of the details of writing tests in Tcl that test Java classes.

Simple Example

Jacl allows us to instantiate objects in a class and call public methods. We use Jacl and the standard Tcl test bed to create tests. In the example below, we call java::new to create an instance of the Java NamedObj class. We can then call public methods of NamedObj by referring to the Java object handle $n:

      test NamedObj-2.1 {Create a NamedObj, set the name, change it} {
          set n [java::new ptolemy.kernel.util.NamedObj]
          set result1 [$n getName]
          $n setName "A Named Obj"
          set result2 [$n getName]
          list $result1 $result2
      } {{} {A Named Obj}}
    
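In addition to java::new, Jacl provides java::call for invoking static methods. A small sketch:

      # Call a static method: the class name, then the method, then its arguments.
      set javaVersion [java::call java.lang.System getProperty "java.version"]
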

Checking Known Failed Test Results

Note that you can combine the Tcl tests and the MoML tests by calling createAndExecute. Even better, you can test for specific error messages with:
      test SDFSchedulerErrors-1.0 {} {
          catch {createAndExecute "rateConsistency.xml"} errorMessage
          list $errorMessage
      } {{ptolemy.kernel.util.IllegalActionException: Failed to compute schedule:
      in .rateConsistency.SDF Director
      Because:
        No solution exists for the balance equations.
        Graph is not consistent under the SDF domain detected on external 
        port .rateConsistency.actor.port2}}
    
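To mark a test as a known failure, pass KNOWN_FAILED as the optional fifth argument to test, as described above. A sketch, where the test name and body are made up for illustration:

      test Foo-2.1 {A test that is expected to fail until the underlying bug is fixed} {
          # Deliberately return the wrong value to illustrate a known failure.
          set x 1
          list $x
      } {2} {KNOWN_FAILED}
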

Java Tcl Test Files

It is best if each Java class has a separate Tcl file that contains its tests. The base name of the Tcl test file should be the same as the name of the Java class being tested. The Tcl test file should be located in the test subdirectory of the directory where the Java class is defined.

For example, if we are testing NamedObj.java, then the Tcl test file should be at test/NamedObj.tcl.

Code Coverage

We use ant and Cobertura for code coverage. See $PTII/doc/coding/ant.htm for information about ant.

For code coverage, see http://chess.eecs.berkeley.edu/ptexternal.


Testing Documentation

The Ptolemy II documentation is written in HTML. There are several tools that can be used to check it.

wget

The wget program can be used to crawl the HTML pages of the release when it is on a website.
On the website, create a temporary top-level $PTII/index.htm that includes a link to doc/index.htm, then run wget:
      wget -np -m http://ptolemy.eecs.berkeley.edu/ptolemyII/release/index.htm >& wget.out
    
This will generate lots of files in a ptolemy.eecs.berkeley.edu directory. This directory can be removed:
      rm -rf ptolemy.eecs.berkeley.edu
    
Look for "Not Found":
      awk '{ if ($0 ~ /Not Found/) { print lineTwo} else {lineTwo = lineOne; lineOne=$0}}' wget.out | uniq | awk '{print $NF}'| grep -v '%5C' | sort  
    

weblint

Weblint reports HTML errors. Weblint was once obtained from ftp://ftp.cre.canon.co.uk/pub/weblint/weblint.tar.gz but has since moved; try a web search.
To run weblint:
      cd $PTII
      make weblint
    

htmlchek

Htmlchek is another tool that reports HTML errors; it also checks for bad links. The htmlchek output is a little hard to read, so we tend to use weblint for checking individual files. Htmlchek was once available at ftp://ftp.cs.buffalo.edu/pub/htmlchek/ but has since moved; try a web search.

The best way to run htmlchek is to create a sample distribution, create the files in the codeDoc directory, and then run htmlchek:

  1. Create the test distribution:
              cd /users/ptII/adm/gen-latest; make htmlchek
            
  2. Reset PTII to point to the test distribution:
              setenv PTII /users/ptII/adm/dists/ptII-latest
              cd $PTII
            
  3. Run make install. This will make the Itcl HTML docs twice, which will populate the doc/codeDoc directories. You need to make the Itcl HTML docs twice so that the cross references are correct.
  4. Run make htmlchek
The output ends up in five files.

All of the references in htmlchekout.HREF that point to .html files should be checked. References to non-HTML files appear in htmlchekout.HREF because the non-HTML files were not included in the list of files that htmlchek ran on. One quick way to search all the *.html files is:

      cd $PTII
      grep mystring `find . -name "*.html" -print`
    

Spell checking

$PTII/util/testsuite/ptspell is a Ptolemy II-specific spelling checker.

Checking the spelling in all the HTML files can be done with:

      cd $PTII
      ptspell `find . -name "*.html" -print`
    

To spell check the comments in the demos:

      cd $PTII
      adm/bin/ptIItxtfiles > /tmp/f
      grep demo /tmp/f | grep .xml > /tmp/m
      ptspell `cat /tmp/m`
    

Check the distribution for bogus files

Run the following makefile rules and commands:
make realclean
This will remove the tclIndex files and the files in doc/codeDoc. The reason to remove the codeDoc files is so that we don't ship HTML files for any classes that have been removed.
make install
This will recreate the tclIndex files and the doc/codeDoc files.
make checkjunk
Look for files in the distribution that should not be there.
adm/bin/chkgifs
This script looks for gif files that are not used by any HTML files in the distribution.

Testing XML

The parser we use in $PTII/com/microstar is a non-validating parser. If you are writing MoML code, you might want to run your file through a validating parser.

Proofreading

Below are some guidelines on proofreading documentation:
  1. Proofreaders should write their names on the front page of the document.
  2. In general, write big, and use a red pen.
  3. Each page that has a typo should have a mark at the top of the page so that editors can easily find the typo.
  4. Proofreading symbols can be found at http://webster.commnet.edu/writing/symbols.htm

Runtime Tests

  1. It is easier to work with the Webstart version and check for missing files than it is to work with the installer.
    Install X10 and rerun cd $PTII; ./configure
    Build a Webstart version with:
              cd $PTII
              ant jars
              make jnlp_all
            
    Invoke Webstart by pointing your browser at $PTII/vergil.jnlp.
  2. Use the about::copyright facility to test for missing files and models that are the wrong size.

How to test the installer

For each case:
  1. Install with the included JVM but don't include the sources
  2. Install without the included JVM but don't include the sources
  3. Install with the included JVM but include the sources
  4. WebStart
Do the following:
  1. Start up all the menu choices and verify that the initial screen has the right version number.
  2. Start up Vergil, check the copyrights by expanding the configuration, and run all the demos.
Other things to try:
  1. Build the sources that are included in the installer:
              cd c:/Ptolemy/ptII11.0.devel
              export PTII=c:/Ptolemy/ptII11.0.devel
              ./configure
              ant
              ant tests  >& tests.out &
            
  2. Run diff against old versions