The diva.whiteboard demo
How to run the demo
The whiteboard demo is similar to the sketch demo, but it supports more standard editor features, such as multiple
pages and a choice of pen colors and widths. In addition to the default command recognition, the whiteboard
embeds an application-specific recognition engine that recognizes component editing.
When the program starts up, it shows an empty page on the whiteboard canvas. The user can add more pages by
clicking the new-page icon on the toolbar and traverse the pages using the previous and next
buttons. The curved-arrow button undoes page actions (add, delete, previous, and next).
Pen colors and widths can be selected from the choice boxes.
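The page handling described above can be sketched as a list of pages, a current-page index, and an undo stack of inverse actions. This is an illustrative model only; the class and method names below are assumptions, not the demo's actual code.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical sketch of the whiteboard's page model (names are
// illustrative): each page action pushes its inverse onto an undo stack.
class PageList {
    private final List<String> pages = new ArrayList<>();
    private int current = 0;
    private final Deque<Runnable> undoStack = new ArrayDeque<>();

    PageList() {
        pages.add("page 1");               // the demo starts with one empty page
    }

    void addPage() {
        pages.add("page " + (pages.size() + 1));
        int added = pages.size() - 1;
        undoStack.push(() -> pages.remove(added));   // inverse: remove the new page
    }

    void nextPage() {
        if (current < pages.size() - 1) {
            current++;
            undoStack.push(() -> current--);         // inverse: go back
        }
    }

    void previousPage() {
        if (current > 0) {
            current--;
            undoStack.push(() -> current++);         // inverse: go forward
        }
    }

    void undo() {
        if (!undoStack.isEmpty()) {
            undoStack.pop().run();                   // replay the inverse action
        }
    }

    int pageCount() { return pages.size(); }
    int currentIndex() { return current; }
}
```

Storing inverse actions rather than snapshots keeps undo cheap and makes each toolbar button responsible for recording how to reverse itself.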
The recognition engine is trained to recognize squares as nodes. It also understands simple component editing:
a component can have input and output ports attached to it, and a network is a special type of component that contains
other components or networks. As shown in the snapshot above, networks are colored pink, components yellow,
input ports red, and output ports blue.
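The component model just described can be sketched as a small class hierarchy. The real demo builds Diva figures; the classes below are assumptions for illustration, including the rule (stated later in this page) that a wire runs only from an output port to an input port.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the recognizer's component model
// (not the demo's actual classes).
class Port {
    enum Direction { INPUT, OUTPUT }
    final Direction direction;
    Port(Direction d) { direction = d; }
}

class Component {
    final List<Port> ports = new ArrayList<>();
    void attach(Port p) { ports.add(p); }
}

// A network is itself a component that contains other components or networks.
class Network extends Component {
    final List<Component> children = new ArrayList<>();
    void add(Component c) { children.add(c); }
}

// A wire may only connect an output port to an input port.
class Wire {
    final Port from, to;
    Wire(Port from, Port to) {
        if (from.direction != Port.Direction.OUTPUT
                || to.direction != Port.Direction.INPUT) {
            throw new IllegalArgumentException(
                "wires run from an output port to an input port");
        }
        this.from = from;
        this.to = to;
    }
}
```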
When a square (s1) is drawn on the canvas and recognized as a node, it is colored gray. If a
smaller square (s2) is drawn on the left edge of the node (s1), s1 is recognized as a component with an input
port (s2) attached to it. Similarly, a square drawn on the right edge of the node is recognized as an output
port. If a square encloses a component, it is recognized as a network. A wire can be created only between
an output port and an input port.
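The geometric rules above can be approximated with bounding-box tests. This is a hedged sketch under simplifying assumptions: the demo's recognizer classifies sketched strokes, not clean rectangles, and the class and threshold choices here are invented for illustration.

```java
import java.awt.geom.Rectangle2D;

// Toy classifier for a newly recognized square s2 relative to an
// existing node s1, following the rules described in the text.
class SquareInterpreter {
    enum Kind { INPUT_PORT, OUTPUT_PORT, NETWORK, NODE }

    static Kind classify(Rectangle2D s1, Rectangle2D s2) {
        if (s2.contains(s1)) {
            return Kind.NETWORK;          // an enclosing square forms a network
        }
        double cy = s2.getCenterY();
        boolean onVerticalSpan = cy > s1.getMinY() && cy < s1.getMaxY();
        // a small square straddling the left edge becomes an input port
        if (onVerticalSpan && s2.getMinX() < s1.getMinX()
                && s2.getMaxX() > s1.getMinX()) {
            return Kind.INPUT_PORT;
        }
        // one straddling the right edge becomes an output port
        if (onVerticalSpan && s2.getMinX() < s1.getMaxX()
                && s2.getMaxX() > s1.getMaxX()) {
            return Kind.OUTPUT_PORT;
        }
        return Kind.NODE;                 // otherwise it is just another node
    }
}
```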
[Figure: How to draw a square]
The recognition algorithms are still under development and prone to occasional mistakes. If a sketch is not
recognized correctly, cross it out and try again!
Note
The gesture recognition in this demo currently supports only single-stroke gestures: for a gesture to be
recognized, it must be completed in one stroke (the path from pen-down to pen-up). The recognition is also
direction-dependent, so a gesture has to be drawn in the same direction as it was trained. The lack
of multi-stroke gestures and direction independence is not a shortcoming of the recognition architecture, but of
this particular recognizer implementation. See the package documentation for details.
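A toy example (not the demo's actual recognizer) shows why simple template matching is direction-dependent: points are compared in drawing order, so the same shape drawn in the reverse direction scores as a poor match against the template.

```java
// Illustrative direction-sensitive matcher: strokes are arrays of
// (x, y) points compared index-by-index in drawing order.
class DirectionSensitiveMatcher {
    // Mean point-to-point distance between two strokes of equal length.
    static double distance(double[][] stroke, double[][] template) {
        double sum = 0;
        for (int i = 0; i < stroke.length; i++) {
            double dx = stroke[i][0] - template[i][0];
            double dy = stroke[i][1] - template[i][1];
            sum += Math.hypot(dx, dy);
        }
        return sum / stroke.length;
    }

    // The same stroke drawn in the opposite direction.
    static double[][] reversed(double[][] stroke) {
        double[][] r = new double[stroke.length][];
        for (int i = 0; i < stroke.length; i++) {
            r[i] = stroke[stroke.length - 1 - i];
        }
        return r;
    }
}
```

A direction-independent recognizer would also match each stroke against its reversal (or normalize direction first); this matcher deliberately does not, mirroring the limitation noted above.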