Testing Ptolemy II

This page is primarily for Ptolemy II Developers. Some of the commands mentioned below are not included in the Ptolemy II distribution.

Contents:

  • Test Suite
  • Testing Java
  • Testing Documentation
  • Testing XML
  • Proofreading
  • Runtime Tests
  • Installer
    Test Suite

    We have included regression tests for most of the Ptolemy II code. Usually, wherever there is a Java file, the tests are in a test subdirectory.

    Running the tests

    The tests themselves are written in Tcl and use Jacl, a 100% Java implementation of a subset of Tcl.

    Resources:

  • Tcl Primer - A quick summary of Tcl
  • The java:: man page
  • We ship Jacl as a jar file called $PTII/lib/ptjacl.jar.
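
    To sanity-check that Jacl is working, you can start its interactive shell and exercise the java:: commands by hand. Below is a minimal sketch of such a session; it assumes Jacl's standard shell class, tcl.lang.Shell, started with java -classpath $PTII/lib/ptjacl.jar tcl.lang.Shell:

    % set version [java::call java.lang.System getProperty java.version]
    % puts "Running Java $version"
    % exit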

    make tests will run the tests in the current directory and any subdirectories.

    Writing your own tests

    There are two ways to write tests:
    1. Use Vergil to create tests using the Test actor
    2. Write tests using Tcl

    Using Vergil to write tests

    The testing infrastructure will automatically run any MoML models located in test/auto directories; the names of these MoML files do not need to be listed anywhere in order for them to be run.

    However, the test infrastructure must be set up in each new directory containing tests.

    Note that MoML models used for testing should follow these conventions:

  • Models in domains/yourdomain/kernel/test/auto should not use actors in domains/yourdomain/lib.
    The reason is that these tests are really testing the domain actors, not the kernel. During the nightly build, the testsuite runs in the kernel directories before running in the lib directories, so the actors in lib will not yet be built.
  • Models that use more than one domain should be located in domains/yourdomain/test/auto.
    The reason is that all the domains might not be built if the test is in lib/test/auto or kernel/test/auto.
    Also, multi domain tests tend to be integration tests, not unit tests.
  • There should be no MoML files (and no test/auto directories) in ptolemy/kernel and its subdirectories. The tests in ptolemy/kernel and subdirectories should not use code from ptolemy/domains.
    The reason is that ptolemy/moml and the domains are not yet built. Again, we want unit tests of the kernel here, not tests of moml and the domains.
  • MoML test models should not use actors from ptolemy.actor.lib.gui, because these GUI actors will not work during the nightly build, when the models are run without a display.
  • To create the infrastructure for a new test directory, do the following:
  • Choose an existing test/ directory that contains an auto/ directory.
    Use this as an example when creating the new test directory.
    A good example is $PTII/ptolemy/actor/lib/test.
  • Create your test/ and test/auto/ directories.
    mkdir test test/auto
    
  • cd to your test/ directory.
    cd test
    
  • Copy over testDefs.tcl and the makefile from the example directory chosen in step 1 above.
    cp $PTII/ptolemy/actor/lib/test/testDefs.tcl .
    cp $PTII/ptolemy/actor/lib/test/makefile .
    
  • Modify these two files to fit your situation, which may differ from the example situation. In particular:
    testDefs.tcl
    Adjust the value of PTII.
    if {![info exists PTII]} {
        # If we are here, then we are probably running jacl and we can't
        # read environment variables
        set PTII [file join [pwd] .. .. .. .. ]
    }
    
    The .. .. .. .. is the relative path to the top of the Ptolemy II tree.
    makefile
    Adjust ME = and ROOT =
    The auto directory is listed in the MISC_FILES section:
    MISC_FILES =	alljtests.tcl \
    		auto
    

    The test/makefile should include a line that invokes test_auto when the test_jsimple rule is invoked:
    test_jsimple: $(EXTRA_SRCS) jclass $(KERNEL_TESTDEFS) alljtests.tcl test_auto
    

    Note: dummy.tcl may appear in the makefile, which some people find confusing. The test makefile structure supports running both graphical and non-graphical Tcl tests. If a particular directory has no graphical or no non-graphical Tcl tests, then we set JGRAPHICAL_TESTS or JSIMPLE_TESTS to dummy.tcl so that when the makefile expands the variable there is a value instead of an empty string. However, if JGRAPHICAL_TESTS or JSIMPLE_TESTS is set to dummy.tcl but is not referred to as a dependency, then you need not have an actual dummy.tcl file.
    For example, we have no graphical tests, so the makefile might look like:
    # Graphical Java tests that use Tcl.
    # If there are no tests, we use a dummy file so that the script that builds
    # alljtests.tcl works.  If you add a test, be sure to add
    # $(JGRAPHICAL_TESTS) to EXTRA_SRCS
    JGRAPHICAL_TESTS = \
    	dummy.tcl
    
    EXTRA_SRCS =	$(TCL_SRCS) $(JSRCS) $(JSIMPLE_TESTS) #$(JGRAPHICAL_TESTS)
    
  • Now go up one directory:
    cd ..
    
    There needs to be a makefile here too. It needs to name the test/ directory you created. If there already is a makefile, edit it. If there is not a makefile, then copy the makefile from a similar directory elsewhere in the tree.
    If the test directory was newly added, then add test to the DIRS and MISC_FILES lines:
    DIRS =		kernel lib demo doc test
    ...
    MISC_FILES =	kernel lib doc test
    
  • If no makefile exists in the directory above test/, you will need to create one and repeat this procedure in the next directory up until you find an existing makefile.
  • If you are a member of the UC Berkeley Ptolemy group and would like to add your directory to the nightly build, see the instructions at http://www.gigascale.org/ptolemy/nightly
  • When you have done all this, the tests in your new test/auto directory ought to run in the nightly build.

    The test passes if it does not throw an exception

    The Test actor (located under "more libraries") can be used to compare the first few results of a simulation with known good results. If the comparison fails, then the test fails.

    If a test is in the optional test/auto/knownFailedTests directory, then it will be marked as a known failure if it fails. (For more information, see Checking Known Failed Test Results below).

    Using Vergil to write tests is quite a bit easier than writing Tcl code, but handling corner cases and testing for erroneous conditions is much more difficult with models. Tcl tests are unit tests, whereas tests that use models are system tests and may mask unit-level bugs.

    Writing Tcl Tests

    The test suite infrastructure is based on the Tcl test suite code.

    The file $PTII/util/testsuite/testDefs.tcl defines the Tcl proc test.

    test takes five arguments:

    1. The name of the test, for example foo-1.0.
      The name of the test should strictly follow the format below. The Tcl tests that come with the Tcl distribution follow a similar format, so unless there is a strong need not to follow the format, please stick with what works.
      • The first part of the name should reflect the command that is being tested.
      • The test number should be separated from it by a dash '-'.
      • Each test number consists of a major value and a minor value, separated by a dot. Usually the major value changes as different parts of the command are being tested. The minor value changes for different tests of the particular part of the command under test.
      • Test numbers usually start with 1, though if you are doing setup, you can start with 0.
      • If you go back later and want to stick a test in between foo-1 and foo-2, you can always call your new test foo-1.1.
    2. The test description, usually a single sentence.
    3. The contents of the test, usually Tcl code that performs the action to be tested. The last line of the contents should return a value.
    4. The results to be compared against.
    5. The last argument is optional and determines what sort of test is being run. The default value is NORMAL, which means that the test should pass under normal conditions. If the value is KNOWN_FAILED, then the test is expected to fail but will eventually be fixed. By using KNOWN_FAILED, developers can mark tests that they know are failing, which saves other developers from attempting to debug known problems.
    Below is a sample piece of code that sources the testDefs.tcl file and then runs one test. The code below deliberately supplies incorrect results to be compared against, so the test suite properly indicates that the test failed.
    if {[string compare test [info procs test]] == 1} then {
        source [file join $PTII util testsuite testDefs.tcl]
    } {}
    test testExample-1.1 {This is the first test example, it does very little} {
    	catch {this is an error} errMsg1
    	set a "this is the value of a"
    	list $errMsg1 $a
    } {{invalid command name "this"} {this is NOT the value of a}}
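
    The optional last argument can mark a test that is expected to fail. Below is a hedged sketch, not a test from the Ptolemy tree; the test name and body are hypothetical, showing only how the KNOWN_FAILED value described above is passed:

    test testExample-2.1 {A test that is known to fail} {
        # Deliberately return a value that does not match the expected
        # results; KNOWN_FAILED marks this as a known problem.
        expr {1 + 1}
    } {3} {KNOWN_FAILED}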
    

    Parts of a test file

    Test files should be located in the test directory.

    It is better to have many small test files than a few large ones, so that other developers can quickly find the tests for the class they are working with. Usually the tests for the class Foo are found in the file test/Foo.tcl.

    Each test file should have the following parts:

    1. The Copyright
    2. The code that loads the test system package
      if {[string compare test [info procs test]] == 1} then {
          source testDefs.tcl
      }
      
      Each directory contains a testDefs.tcl file which in turn sources $PTII/util/testsuite/testDefs.tcl. The idea here is that if the test framework changes, each test file need not be updated.
    3. A line that the user can uncomment if they want the test system to produce verbose messages:
      #set VERBOSE 1
      
    4. The individual tests, which should loosely follow the Ptolemy II file format standard:
      ############################################################################
      #### Foo
      test Foo-1.1 {Test out Foo} {
      
      } {}
      

    Test Styles

    There are two types of tests:
    1. Tests that handle all necessary setup in each individual test.
    2. Tests that rely on the earlier tests to do setup.
    In general, each test file should be able to be run over and over again in a binary without exiting the binary (it should be idempotent).

    It is up to the author of the tests whether each individual test does all the setup necessary. If each test is atomic, then it is easy to highlight the text of an individual test and run it. If many tests share common setup, then using a separate procedure to do the setup might help. On the negative side, atomic tests are usually longer and have more complicated return results. A sketch of the shared-setup style follows.
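
    Below is a minimal, hypothetical sketch (not from the Ptolemy tree) of the shared-setup style, where a helper proc builds the object under test so that each test stays short and independently runnable:

    # Shared setup: create a fresh Workspace for each test that needs one.
    proc makeWorkspace {} {
        return [java::new ptolemy.kernel.util.Workspace "W"]
    }

    test Workspace-1.1 {A new workspace should report its name} {
        set w [makeWorkspace]
        $w getName
    } {W}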


    Testing Java

    Jacl is a 100% Java implementation of a subset of Tcl. We use Jacl to test Java by writing Tcl code that exercises the Java classes.

    Running the tests

    To run all the tests, do cd $PTII; make tests

    If you run make in a test directory that contains tests written in Tcl for testing Java classes, then the 'right thing' should just happen.

    If you are running in Eclipse, see Debugging Tcl Tests.

    Writing Tests for Java

    Below we discuss some of the details of writing tests in Tcl that test Java classes.

    Simple Example

    Jacl allows us to instantiate objects in a class and call public methods. We use Jacl and the standard Tcl test bed to create tests. In the example below, we call java::new to create an instance of the Java NamedObj class. We can then call public methods of NamedObj by referring to the Java object handle $n:

    test NamedObj-2.1 {Create a NamedObj, set the name, change it} {
        set n [java::new ptolemy.kernel.util.NamedObj]
        set result1 [$n getName]
        $n setName "A Named Obj"
        set result2 [$n getName]
        list $result1 $result2
    } {{} {A Named Obj}}
    

    Checking Known Failed Test Results

    Note that you can combine the Tcl tests and the MoML tests by calling createAndExecute. Even better, you can test for specific error messages with:
    test SDFSchedulerErrors-1.0 {} {
        catch {createAndExecute "rateConsistency.xml"} errorMessage
        list $errorMessage
    } {{ptolemy.kernel.util.IllegalActionException: Failed to compute schedule:
       in .rateConsistency.SDF Director
    Because:
    No solution exists for the balance equations.
    Graph is not consistent under the SDF domain detected on external 
    port .rateConsistency.actor.port2}}
    
    

    Java Tcl Test Files

    It is best if each Java class has a separate Tcl file containing its tests. The base name of the Tcl test file should be the same as that of the Java class being tested. The Tcl test file should be located in the test subdirectory of the directory where the Java class is defined.

    For example, if we are testing NamedObj.java, then the Tcl test file should be at test/NamedObj.tcl.

    JavaScope

    We use Sun's JavaScope test coverage tool as part of our testing environment. Unfortunately, on November 5, 1999, Sun decided to discontinue support of JavaScope. In the short term, we are continuing to use JavaScope.

    Here's how to review the test suite code coverage:

    1. Run:
      cd $PTII/ptolemy/kernel
      make jsall
      
      The jsall makefile rule does the following:
      • JavaScope uses a program called jsinstr to instrument the Java files. jsinstr copies the original files to the jsoriginal directory and then adds Java function calls to copies of the files. These function calls increment counters in a database at runtime.
      • To compile the instrumented classes, the JavaScope.zip file is added to the CLASSPATH and the classes are recompiled.
      • The test suite is run.
    2. To view the code coverage, run either jsreport, jssummary or javascope:
      jssummary -HTML -PROGRESS -OUTFILE=/tmp/summary
      jsreport -HTML -PROGRESS -RECURSIVE -OUTDIR=/tmp/report
      
    3. To restore the files to their original state, run:
      make jsrestore
      

    JavaScope Details

  • jsinstr saves the original Java files to the jsoriginal directory. The jsall makefile rule checks to see if this directory is present and, if it is not, runs jsinstr. The jsrestore rule runs jsrestore and then attempts to remove the jsoriginal directory so that the next run of the jsall rule will recreate it. If you are running jsinstr by hand on files that are not in the makefile, then you may find it necessary to clean out the jsoriginal directory with:
    jsrestore *.java
    rm jsoriginal/README
    rmdir jsoriginal
    
  • JavaScope writes the code coverage information to ~/jsdatabase. You might find it helpful to remove this directory periodically.
    Flushing the JavaScope database

    The JavaScope database must be flushed by hand at the end of a run, or the code coverage data of the run will not be dumped out to disk.

    There are two ways to do this.

    1. The IFLUSHCLASS option will flush out the coverage information for the current class at the end of each method.
    2. The IFLUSH option will flush out all the coverage information at the end of each method.
    Obviously, using IFLUSH will take more time than using IFLUSHCLASS.

    The jsinstr options are set by consulting the following resources, in order:

    1. system properties - environment variables
    2. $HOME/javascope.properties
    3. ./javascope.properties in the current directory.
    4. Options can be embedded in files as comments:
      /*jsoptions: ...*/
      
      Embedded comment options can be set more than once in a file, and an option remains in effect until it is changed by another embedded comment option.
    5. Command line arguments to jsinstr.

    The doneTests proc of $PTII/util/testsuite/testDefs.tcl includes a call that will flush the database:

        catch {java::call COM.sun.suntest.javascope.database.js\$ flush}
    

    Testing Documentation

    The Ptolemy II documentation is written in HTML. There are several tools that can be used to check it.

    wget

    The wget program can be used to crawl the HTML pages of the release when it is on a website.
    On the website, create a temporary top-level $PTII/index.htm that includes a link to doc/index.htm, then run wget:
    wget -np -m http://ptolemy.eecs.berkeley.edu/ptolemyII/majorRelease/minorRelease/ >& wget.out
    
    This will generate lots of files in a ptolemy.eecs.berkeley.edu directory. This directory can be removed:
    rm -rf ptolemy.eecs.berkeley.edu
    
    Look for "Not Found" errors:
    egrep 'http:|Not Found' wget.out
    

    weblint

    Weblint reports HTML errors to the user. Weblint was once available from ftp://ftp.cre.canon.co.uk/pub/weblint/weblint.tar.gz but has since moved; try a web search.
    To run weblint:
    cd $PTII
    make weblint
    

    htmlchek

    Htmlchek is another tool that reports HTML errors; it also checks for bad links. The htmlchek output is a little hard to read, so we tend to use weblint for checking individual files. htmlchek was once available at ftp://ftp.cs.buffalo.edu/pub/htmlchek/ but has since moved; try a web search.

    The best way to run htmlchek is to create a sample distribution, create the files in the codeDoc directory, and then run htmlchek:

    1. Create the test distribution:
      cd /users/ptII/adm/gen-latest; make htmlchek
      
    2. Reset PTII to point to the test distribution:
      setenv PTII /users/ptII/adm/dists/ptII-latest
      cd $PTII
      
    3. Run make install. This will make the Itcl HTML docs twice, which will populate the doc/codeDoc directories. You need to make the Itcl HTML docs twice so that the cross references are correct.
    4. Run make htmlchek
    The output ends up in five files:
  • htmlchekout.ERR - HTML usage errors
  • htmlchekout.NAME - Locations in the specified files that are not referenced by any of those files
  • htmlchekout.HREF - References from the specified files that are not found in the files. This file is by far the most important file to look at.
  • htmlchekout.SRC - References to online images.
  • htmlchekout.MAP - Cross dependency information.
  • All of the references in htmlchekout.HREF that point to .html files should be checked. References to non-HTML files appear in htmlchekout.HREF because the non-HTML files were not included in the list of files that htmlchek ran on. One quick way to search all the *.html files is

    cd $PTII
    grep mystring `find . -name "*.html" -print`
    

    Spell checking

    $PTII/util/testsuite/ptspell is a Ptolemy II specific spelling checker. ptspell has the following features:
  • It uses $PTII/util/testsuite/ptlocaldict as a local dictionary of acceptable words that are not in the regular system dictionary. ptlocaldict is kept in ASCII sort order.
  • ptspell splits up words that contain embedded capital letters and then runs spell on the pieces, so it can report spelling problems in variable, method, and class names. This mechanism also reduces the number of words reported as misspelled merely because two words are stuck together. (A sketch of this splitting appears after this list.)
  • If /usr/local/bin/ispell is present, then it will use it. If you are running under Windows with Cygwin, you can download a prebuilt version of ispell from http://ptolemy.eecs.berkeley.edu/tycho/tychoTools.htm
  • Checking the spelling in all the HTML files can be done with:

    cd $PTII
    ptspell `find . -name "*.html" -print`
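
    The splitting of embedded capitals works roughly like the following sketch; this shows just the idea, not the actual ptspell implementation:

    # Insert a space before each capital letter that follows a lowercase
    # letter, then lowercase the result so each piece can be spell checked.
    proc splitEmbeddedCaps {word} {
        regsub -all {([a-z])([A-Z])} $word {\1 \2} spaced
        return [string tolower $spaced]
    }
    # splitEmbeddedCaps "getContainerName" returns "get container name"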
    

    Spell check the comments in the demos

    cd $PTII
    adm/bin/ptIItxtfiles > /tmp/f
    grep demo /tmp/f | grep .xml > /tmp/m
    ptspell `cat /tmp/m`
    

    Check the distribution for bogus files

    Run the following makefile rules and commands:
    1. make realclean
      This removes the tclIndex files and the files in doc/codeDoc. The reason to remove the codeDoc files is so that we do not ship HTML files for any classes that have been removed.
    2. make install
      This recreates the tclIndex files and the doc/codeDoc files.
    3. make checkjunk
      This looks for files in the distribution that should not be there.
    4. adm/bin/chkgifs
      This script looks for gif files that are not used by HTML files in the distribution.

    Testing XML

    The parser we use in $PTII/com/microstar is a non-validating parser. If you are writing MoML code, you might want to run your file through a validating parser; below are a few references:
  • www.hcrc.ed.ac.uk
  • Yahoo HTML Validators
    Proofreading

    Below are some guidelines on proofreading documentation:
    1. Proofreaders should write their names on the front page of the document.
    2. In general, write big, and use a red pen.
    3. Each page that has a typo should have a mark at the top of the page so that editors can easily find the typo.
    4. Proofreading symbols can be found at http://webster.commnet.edu/writing/symbols.htm

    Runtime Tests

    1. It is easier to work with the Web Start version and check for missing files than it is to work with the installer.
      Install X10 and rerun configure:
      cd $PTII
      ./configure
      
      Build a Web Start version with:
      cd $PTII
      make install jnlp_all
      
      Invoke Web Start by pointing your browser at $PTII/vergil.jnlp
    2. Use the about::copyright facility to test for missing files and models that are the wrong size.

    How to test the installer

    For each of the following cases:
    1. Install with the included JVM but do not include the sources
    2. Install without the included JVM and do not include the sources
    3. Install with the included JVM and include the sources
    4. Web Start
    Do the following:
    1. Start up all the menu choices and verify that the initial screen has the right version number
    2. Start up Vergil, check the copyrights by expanding the configuration, and run all the demos
    Other things to try
    1. Build the sources that are included in the installer
      cd c:/Ptolemy/ptII5.1
      export PTII=c:/Ptolemy/ptII5.1
      ./configure
      make fast install tests >& make.out &
      
    2. Run diff against old versions

    Last Updated: $Date: 2007/01/14 04:32:34 $