
<chp id=avr_intro><ti>A Brief Introduction to ActiveVRML
<if rid=HTML>
<h1><ti></ti>
<p>This document introduces the <e>Active Virtual Reality Modeling Language</e> (ActiveVRML), a modeling language for specifying interactive animations.
<list>
<i><art name=yeldot.gif align=inline></art><xref rid=intro></xref>
<d><art name=yeldot.gif align=inline></art><xref rid=user_interaction></xref>
<i><art name=yeldot.gif align=inline></art><xref rid=media></xref>
<d><art name=yeldot.gif align=inline></art><xref rid=time_transforms></xref>
<i><art name=yeldot.gif align=inline></art><xref rid=formats></xref>
<d><art name=yeldot.gif align=inline></art><xref rid=diff_integration></xref>
<i><art name=yeldot.gif align=inline></art><xref rid=comp_spec></xref>
<d><art name=yeldot.gif align=inline></art><xref rid=lang_int></xref>
<i><art name=yeldot.gif align=inline></art><xref rid=scoped_name></xref>
<d><art name=yeldot.gif align=inline></art><xref rid=distrib></xref>
<i><art name=yeldot.gif align=inline></art><xref rid=param></xref>
<d><art name=yeldot.gif align=inline></art><xref rid=optim></xref>
<i><art name=yeldot.gif align=inline></art><xref rid=behavior></xref>
<d><art name=yeldot.gif align=inline></art><xref rid=conclusion></xref>
<i><art name=yeldot.gif align=inline></art><xref rid=sound></xref>
<d><art name=yeldot.gif align=inline></art><xref rid=appendix></xref>
<i><art name=yeldot.gif align=inline></art><xref rid=reactivity></xref>
<d><art name=yeldot.gif align=inline></art><xref rid=footnotes></xref>
</list>
<endif rid=HTML>
<h1 id=intro><ti>Introduction
<p>To allow the creation of interactive animations to be as natural as possible, ActiveVRML is based on a simple and intuitively familiar view of the world; that is, as a hybrid of continuous variations and discrete events. For example, the behavior of a bouncing ball consists of continuous trajectories and discrete collisions. Trajectories cause collision events, and collision events cause new trajectories.
<p>Using ActiveVRML, one can create simple or sophisticated animations without programming in the usual sense of the word. For example:
<ul>
<li>Although many frames are generated in presenting an animation, the author is freed from any notion of sampling or frame generation, but rather describes how various animation parameters vary continuously with time, user input, and other parameters.
<li>An author describes events influencing an animation and the effects of these events on the animation. The author is freed from the programming mechanics of checking for events and causing the effects to happen.
<li>Although animations involve an extremely high degree of simultaneity (concurrency), the author is freed from such programming issues as multi-threading.
<li>Linguistically, there are no statements (commands) that are executed for their effect, but rather expressions that are analyzed for their value. ActiveVRML uses this approach to make specifying animations as natural as possible, while simultaneously retaining maximal opportunities for optimization.1
</ul>
<p>While ActiveVRML is a modeling language, it exploits three of the key ideas that give programming languages their tremendous power:
<ul>
<li><e>Composition</e>. Animations are built of simpler animations in a modular, building-block style. By applying composition repeatedly, complex animations can be constructed, while each layer of description remains manageable.
<li><e>Parameterization</e>. Families of related animations can be defined in terms of parameters of any kind, including other animations.
<li><e>Scoped naming</e>. Animations and animation families can be given names to facilitate readability and convenient reuse. The naming of an animation can be explicitly limited, or <e>scoped</e>, so as not to conflict with possibly unrelated uses of the same name elsewhere in a description.
</ul>
<p>ActiveVRML applies these principles pervasively to all types of static models, continuous behaviors, and discrete events.
<p>To make the discussion of ActiveVRML more concrete, the first few sections of this document use a running example&em;a solar system that begins as a single static planet, and then adds animation, other planets, and sound.
<p>The remainder of this document is organized as follows. We first outline the media types and operations. Next, we describe how ActiveVRML complements other Internet-standard file formats by supporting importation. We then illustrate the key ideas of composition, parameterization, and scoped naming. Next, we introduce <e>behaviors</e>, which are time-varying values of all types. We then show how to add spatialized sound to a model. Next, we explain <e>reactivity</e> and the various kinds of events that support reactivity. We then describe support for user interaction. Next, we illustrate the principle of time transformation, which provides temporal modularity. We then briefly describe the built-in support for behaviors defined in terms of rates of change. Finally, we develop, as an extended example, a collection of balls bouncing around in a box.
<h1 id=media><ti>Overview of Supported Media Types
<p>The ActiveVRML Reference Manual describes the complete set of types and operations supported by ActiveVRML 1.0. This section provides a brief overview. (All of the types and operations are time-varying.)
<ul>
<li><e>3-D geometry</e>. Supports importation, aggregation and transformation. Also supports texture mapping of interactive animated images, manipulation of color and opacity, and embedding of sounds and lights.
<li><e>Images</e>. Provides infinite resolution and extent images. Supports importation, 2-D transformation, opacity manipulation, and overlaying. Also supports rendering an image from a 3-D model and rendering an image out of rich text. Even geometrical and image renderings have infinite resolution and extent, since discretization and cropping are left to the display step, which is always left implicit.
<li><e>Sound</e>. Rudimentary support for importing, manipulating, and mixing sounds. Also, <e>sonic rendering</e> of 3-D models; that is, geometric models may be listened to as well as looked at. Conceptually infinite sampling rate and sample precision.
<li><e>Montages</e>. Composite 2 1/2-D images, supporting convenient, multi-layered cel animation.
<li><e>2-D and 3-D points and vectors</e>. Operations include vector/vector and point/vector addition, point subtraction, scalar/vector multiplication, and dot and cross products. Also supports construction and deconstruction in rectangular and polar/spherical coordinates.
<li><e>2-D and 3-D transforms</e>. Supports translate, scale, rotate, shear, identity, composition, inversion, and matrix-based construction. Can be extended to non-linear deformations, and so forth.
<li><e>Colors</e>. Various constants, construction, and deconstruction in RGB and HSL color spaces.
<li><e>Text</e>. Rudimentary support for formatted text, with color, font family, optional bold, and italic. If there are Internet standards for rich text, then we would like to support importation as well.
<li><e>Miscellaneous</e>. Support for numbers, characters, and strings.
</ul>
<h1 id=formats><ti>Embracing Existing Formats
<p>There is an enormous amount of raw material available today, both commercially and freely on the Internet, that can be used as a starting point for constructing interactive animations. This material is in files of many different formats representing geometry, images, video, sound, animation, motion paths, and so forth. ActiveVRML works with these representations directly, rather than requiring authors to create raw material specifically for ActiveVRML, or even converting existing material into a new format.
<p>For our solar system, we start with a VRML 1.0 model of a unit sphere and an earth texture in GIF format. We import this content into ActiveVRML by means of import, and name the results for later use.2
<ex>sphere = first(import("sphere.wrl"));
earthMap = first(import("earth-map.gif"));
</ex>
<p>Each of these two lines is a <e>definition</e>, which both introduces a new name and provides an expression for the value of that name. The modeling notion of definition differs from the programming notion of <e>assignment</e>, in that the association between name and value established by a definition holds throughout a model's lifetime. Authors, readers, and automatic optimizers can thus know from seeing a definition like the first one above that sphere will always denote the imported model.
<p>All names are typed, but types are almost always inferred automatically by ActiveVRML, and so rarely need to be specified explicitly. These two definitions implicitly declare sphere to be of type <k>geometry</k>, and earthMap to be of type <k>image</k>.
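<p>For illustration, if we did want to state these types explicitly, a plausible sketch uses the same colon syntax that a later example (bouncyBall, in the appendix) uses for function parameters; the annotations here would be redundant, since the types are inferred:
<ex>sphere: geometry = first(import("sphere.wrl"));
earthMap: image = first(import("earth-map.gif"));
</ex>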
<h1 id=comp_spec><ti>Compositional Specification
<p>As mentioned in the introduction to this document, <e>composition</e> is the building-block style of using existing models to make new models, combining the resulting models to make more new models, and so on.
<p>To start building our earth geometry, we apply the earth texture to our earth sphere.
<ex>unitEarth = texture(earthMap, sphere);
</ex>
<p>In our solar system, we will take the Sun's radius to be one unit, and the earth's to be half as big. Given the texture-mapped unit sphere, we first make a transform that scales by one half, uniformly.
<ex>halfScale = scale3(0.5);
</ex>
<p>Now we can form the reduced sphere by applying the halfScale transform to the texture-mapped unit sphere:
<ex>earth = transformGeometry(halfScale, unitEarth);
</ex>
<p>Next we want to reposition the earth, so that it sits apart from the sun. We make a translation transform and then apply it to the earth:
<ex>moveXby2 = translate(2,0,0);
movedEarth = transformGeometry(moveXby2, earth);
</ex>
<p>Giving names to transforms, textures, and geometric models at every step of composition leads to descriptions that are tedious to read and write. In ActiveVRML, naming and composition are completely independent, so the author is free to choose how much and where to introduce names, based on the author's individual style and intended reuse.
<p>For example, we can name only the imported sphere and texture and the complete moved earth, as in the following description, which is equivalent to the previous one but does not introduce as many names:
<ex>sphere = first(import("sphere.wrl"));
earthMap = first(import("earth-map.gif"));
movedEarth =
    transformGeometry(translate(2,0,0),
        transformGeometry(scale3(0.5),
            texture(earthMap, sphere)));
</ex>
<p>Next we build a model of the sun. No transformation is required, but we do want it to be yellow:
<ex>sun = diffuseColor(yellow, sphere);
</ex>
<p>To complete our first very simple solar system, we simply combine the sun and moved earth into one model, using the infix union operation, which takes two geometric models and results in a new, aggregate model.
<ex>solarSystem1 = sun union movedEarth;
</ex>
<h1 id=scoped_name><ti>Scoped Naming
<p>Naming is useful for making descriptions understandable and reusable, but can easily cause clutter. When intermediate animations are named and then used in only one or a few animations (as might be the case of sun and movedEarth above), they can interfere with available choices for intermediate names in other animations. While this clutter is not a problem with very simple animations described and maintained by a single author, it can become a serious obstacle as complexity grows and separately authored animations are combined to work together.
<p>The solution to name clutter is to explicitly limit the scope of a name's definition. In our example, we will leave the sphere, earthMap, and solarSystem1 definitions unscoped, but limit the scope of the sun and movedEarth definitions.
<p>To limit the scope of a collection of definitions to a given expression, use the form
<ex>let definitions in expression </ex>
<p>(In addition to the given expression, the scope of the definitions includes the bodies of all of the definitions themselves, to allow for mutual recursion.)
<ex>solarSystem1 =
    let
        movedEarth =
            transformGeometry(translate(2,0,0),
                transformGeometry(scale3(0.5),
                    texture(earthMap, sphere)));
        sun = diffuseColor(yellow, sphere);
    in
        sun union movedEarth;
</ex>
<p>The scope of movedEarth and sun is the expression in the last line of this definition of solarSystem1. Any other potential uses of the names movedEarth and sun would not refer to the scoped definitions above.
<h1 id=param><ti>Parameterization
<p>It is often desirable to create several animations that are similar but not identical. If such models differ only by transformation&em;for instance, if they are translated and reoriented versions of a single model&em;the composition approach is helpful. In general, however, reuse with transform application (which corresponds to the <e>instancing</e> facility commonly found in graphics modeling and programming systems) is a very limited technique.
<p>ActiveVRML goes far beyond instancing by providing a simple but extremely general and powerful form of <e>parameterization</e>. Families of related animations can be defined in terms of parameters of any kind, including other animations.
<p>As an example of parameterization, suppose that we want a variety of simple solar systems differing only in the sun color and an angle of rotation of the earth around the sun. Each of these solar systems has its own color and own rotation angle, but in all other ways is identical to its other family members. We define such a family as follows. (Note that sunColor and earthAngle are parameter names that refer generically to the color and angle that distinguish one simple solar system from another.)
<ex>solarSystem2(sunColor, earthAngle) =
    let
        movedEarth =
            transformGeometry(rotate(yVector3, earthAngle),
                transformGeometry(translate(2,0,0),
                    transformGeometry(scale3(0.5),
                        texture(earthMap, sphere))));
        sun = diffuseColor(sunColor, sphere);
    in
        sun union movedEarth;
</ex>
<p>To instantiate a solar system from this family, apply solarSystem2 to a color and an angle. For instance,
<ex>solarSystem2(yellow, 0)
</ex>
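<p>yields the same model as solarSystem1, while
<ex>solarSystem2(red, pi/2)
</ex>
<p>gives a red sun with the earth a quarter revolution along its orbit.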
<h1 id=behavior><ti>Behaviors
<p>Up to this point, our examples have described <e>static</e> <e>models</e>&em;that is, models that do not vary with time. These models were built compositionally, from static numbers, colors, images, transforms, and other models. In ActiveVRML, one can just as easily express <e>behaviors</e>, that is, time-varying values of all types, with static values being just degenerate versions of the general case.
<p>The simplest non-static behavior is time, which is a number-valued behavior that starts out with value zero and increases at a rate of one unit per second.
<p>As a simple example of a compositionally defined behavior, the following expression describes a number-valued behavior that starts out with value zero and increases at a rate of 2π per second:
<ex>rising = 2 * pi * time;
</ex>
<p>The use of time here refers to a <e>local</e>, not a <e>global</e>, notion of time. Just as geometric models are generally specified in local spatial (or <e>modeling</e>) coordinates, behaviors of all types are generally specified in local temporal coordinates, and are then subjected to temporal transformation, as discussed in the section "Time Transforms," and combined with other, possibly temporally transformed, behaviors.
<p>We can use this number behavior to describe a time-varying uniform scaling transform that starts as a zero scale and increases in size:
<ex>growing = scale3(rising);
</ex>
<p>And we can use this growing behavior to describe a geometry-valued behavior, that is, a 3-D animation, such as a solar system growing from nothing:
<ex>growingSolarSystem1 = transformGeometry(growing, solarSystem1);
</ex>
<p>As always, intermediate definitions are optional; we could just as well use:
<ex>growingSolarSystem1 =
    transformGeometry(scale3(2 * pi * time), solarSystem1);
</ex>
<p>With a slight variation, we could have the scale go back and forth between 0 and 2:
<ex>pulsating =
    transformGeometry(scale3(1 + sin(time)), solarSystem1);
</ex>
<p>We can also apply our solarSystem2 family, defined above, to behavior arguments to create time-varying solar systems, as in the following example in which the sun color runs through a variety of hues while the earth rotates around the sun.
<ex>animatedSolarSystem2 =
    solarSystem2(colorHsl(time, 0.5, 0.5), 2 * pi * time)
</ex>
<h2><ti>Behaviors as Data Flow
<p>For some people, it is helpful to visualize behaviors as data flow graphs. For example, the animatedSolarSystem2 behavior above can be illustrated as in the figure below. Note that, unlike traditional data flow, behaviors describe a <e>continuous</e> flow of values, not a discrete sequence.
<p><art name="pic01.wmf"></art>
<p>Data flow diagrams, while somewhat helpful for illustrating simple non-reactive behaviors, are much weaker than what can be expressed in ActiveVRML, because of both reactivity and time transformability.
<h2><ti>More Parameterization
<p>We would now like to enrich our solar system in two ways: by making the earth revolve around its own axis, as well as rotate about the sun, and by adding a moon that revolves about its axis and rotates around the earth. Parameterization allows us to capture the similarities between moon and earth, while allowing for their differences.
<p>We start with a simple definition that rotates a given model with a given period:
<ex>rotateWithPeriod(geo, orbitPeriod) =
    transformGeometry(rotate(yVector3, 2 * pi * time / orbitPeriod), geo);
</ex>
<p>We use rotateWithPeriod to create a revolving earth and moon and as a building block for the following definition, which puts models into orbit:
<ex>orbit(geo, orbitPeriod, orbitRadius) =
    rotateWithPeriod(transformGeometry(translate(orbitRadius, 0, 0), geo),
                     orbitPeriod)
</ex>
<p>We can now define our extended solar system:
<ex>solarSystem3 =
    let
        // constants
        sunRadius = 1                        // size of the sun
        day = 3                              // seconds per day
        earthRadius = 0.5 * sunRadius        // size of earth
        earthRotationPeriod = 1 * day
        earthOrbitRadius = 2.0 * sunRadius
        earthOrbitPeriod = 365 * day
        moonRadius = 0.25 * earthRadius      // size of moon
        moonRotationPeriod = 28 * day
        moonOrbitRadius = 1.5 * earthRadius
        moonOrbitPeriod = moonRotationPeriod
        // sun is a yellow sphere
        // earth is a sphere with the earth-map texture
        // moon is a gray sphere
        sun = transformGeometry(scale3(sunRadius),
                  diffuseColor(yellow, sphere));
        earth = transformGeometry(scale3(earthRadius),
                    texture(earthMap, sphere));
        moon = transformGeometry(scale3(moonRadius),
                   diffuseColor(colorRgb(0.5, 0.5, 0.5), sphere));
        // define the relationships between and the motions of the bodies
        moonSystem = rotateWithPeriod(moon, moonRotationPeriod)
        earthSystem =
            rotateWithPeriod(earth, earthRotationPeriod) union
            orbit(moonSystem, moonOrbitPeriod, moonOrbitRadius)
        sunSystem =
            sun union
            orbit(earthSystem, earthOrbitPeriod, earthOrbitRadius)
    in
        sunSystem
</ex>
<h1 id=sound><ti>Adding Sound
<p>We will now add sound to our solar system example by having the earth emit a "whooshing" sound3. The sound will come from the earth, so as a user moves around in the solar system or as the earth moves around, the user will be able to maintain a sense of the spatial relationship, even when the earth is out of sight. Moreover, if the moon is making a sound as well, the user will hear both sounds appropriately altered and mixed.
<p>All that is necessary to add sound is to change the earth to include a spatially embedded sound; we modify earth in the solarSystem2 definition as follows:
<ex>earth =
    transformGeometry(scale3(earthRadius),
        texture(earthMap, sphere))
    union
    soundSource3(first(import("whoosh.au")));
</ex>
<p>The soundSource3 function used here places a sound at the origin in 3-D space, converting it into a geometric model, which can then be transformed and combined with other geometric models.
<p>We can also make sound attributes vary with time. For example, we can adjust the earth sound's pitch so that it fluctuates during the day, as in the following definition. The formula used with pitch below causes the pitch factor to oscillate between 0.5 and 1.5 over the course of each day.
<ex>earth =
    transformGeometry(scale3(earthRadius), texture(earthMap, sphere))
    union
    soundSource3(
        pitch(sin(2 * pi * time / day)/2 + 1,
              first(import("whoosh.au"))));
</ex>
<h1 id=reactivity><ti>Reactivity
<p>In the real world, as well as in computer games, simulations, and other applications of interactive animation, behaviors are influenced by <e>events</e>, and can be modeled as a series of events and reactions (or <e>stimuli</e> and <e>responses</e>). In this document, we refer to behaviors that react to an event as <e>reactive behaviors</e>.
<h2><ti>Simple Reactivity
<p>As a very simple example of a reactive behavior, suppose that we want our solar system's base color to be red at first, but then become green when a user presses the left button on the mouse. We can illustrate this two-phase reactive color as follows, where, for succinctness, LBP refers to the event of pressing the left button:
<p><art name="pic02.wmf"></art>
<p>In ActiveVRML, this behavior is expressed as
<ex>twoPhase = red until LBP => green
</ex>
<p>In this example and the following ones, the behavior phases are static values. In general, however, they may be arbitrarily complex behaviors.
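<p>For instance, the second phase could itself be animated; the following small sketch, built only from operations introduced earlier, switches from a static red to a color whose hue varies with time:
<ex>fading = red until LBP => colorHsl(time, 0.5, 0.5)
</ex>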
<h2><ti>Chaining
<p>When the user presses the left button, twoPhase turns from red to green, and stays green permanently; that is, it is no longer reactive. We can also specify a behavior that is still reactive in its second phase. For example, we can have the solar system's color change to yellow when the user presses the left button for the second time:
<p><art name="pic03.wmf"></art>
<p>In ActiveVRML, this process is expressed as follows:
<ex>threePhase =
    red until
        LBP => (green until LBP => yellow)
</ex>
<h2><ti>Competing Events
<p>In the twoPhase and threePhase examples, each phase was interested in at most one event (LBP or nothing). Often, a phase reacts to a number of different events, each leading to a different new phase. For instance, we can define a variation of twoPhase that also starts in the red phase, but will react to either a left or right button press with a different new behavior:
<p><art name="pic04.wmf"></art>
<p>where RBP refers to our user's right button press event.
<p>In ActiveVRML, this process is expressed as follows:
<ex>choose =
    red until
        LBP => green
      | RBP => blue
</ex>
<h2><ti>Repetition
<p>Now suppose we want a color that switches back and forth between red and green at each button press, no matter how many times a button is pressed. Describing this repetitive behavior by a chain of single-color phases, as with twoPhase and threePhase, would require an infinite chain. Fortunately, this infinite chain has a succinct description.
<p><art name="pic05.wmf"></art>
<p>In ActiveVRML, this repetitive behavior is expressed as follows:
<ex>cyclic =
    red until
        LBP => green until
            LBP => cyclic
</ex>
<note><ti>Note
<p>As illustrated in this example, ActiveVRML definitions may be self-referential.
</note>
<h2><ti>Hierarchical Reactivity
<p>In the previous three reactive behavior examples, each phase was a simple static color. In general, each phase of a reactive behavior can be an arbitrary behavior, even a reactive one. For example, we may want to present our user with the red/green cyclic behavior above only until the user presses the mouse's right button, at which time the color becomes permanently yellow.
<p><art name="pic06.wmf" ></art>
<p>In ActiveVRML, this process is expressed as follows:
<ex>cyclic until
RBP => yellow
</ex>
<h2><ti>Parametric Reactivity
<p>Sometimes a reactive behavior goes through a sequence of phases that are similar, but not identical. For instance, a game may need to keep track of a player's score. Suppose we have already defined scored to refer to the event of a player scoring a point. (The subject of how events such as scored are defined is addressed later.) A score-keeping behavior can be illustrated as follows:
<p><art name="pic07.wmf" ></art>
<p>Each phase in this score-keeping behavior is similar in that its value is a static number. It is waiting for an occurrence of the scored event, at which time it will switch to a similar phase with one greater value. To define all of these phase behaviors at once, we describe the family parameterized by the only difference among them&em;the current score:
<ex>score(current) =
    current until
        scored => score(current+1)
</ex>
<p>The behavior that starts counting from 0 is expressed as follows:
<ex>scoreFromZero = score(0)
</ex>
<p>As always, we can limit the scope of the intermediate definition, even for parameterized definitions:
<ex>scoreFromZero =
    let
        score(current) =
            current until
                scored => score(current+1)
    in
        score(0)
</ex>
<h2><ti>Event Data
<p>Some events have data associated with their occurrences. For example, each occurrence of a key press event has an associated character value. (It would be unwieldy to have a separate event associated with every key on a keyboard.)
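<p>Before generalizing, here is a small sketch in the same style as the score-keeping behavior above: a character-valued behavior that always holds the most recently pressed key (assuming keyPress names the key press event, as in a later example). Each key press occurrence produces a character, which becomes the parameter of the next phase.
<ex>showKey(c) =
    c until
        keyPress => showKey
</ex>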
<p>As another example of events with data, we can generalize our score-keeping behavior so that each occurrence of the scored event could have its own number of points to be added to the total score. In the new version shown below, the event data generated by the scored event (number of points) is consumed by a parameterized behavior (addPoints below), which adds the number of points to the current score and continues counting.
<ex>score(current) =
    let
        addPoints(points) =
            score(current+points)
    in
        current until
            scored => addPoints
</ex>
<p>As mentioned in the previous section "Compositional Specification," naming is optional. Even parameterized definitions can be replaced by the parameterized behavior itself, using the construct
<ex>function (parameters). expression</ex>
<p>The following definition of score is equivalent to the previous one.
<ex>score(current) =
    current until
        scored => function (points). score(current+points)
</ex>
<h2><ti>The Varieties of Events
<p>The preceding section illustrated a variety of ways to use events to describe behaviors in terms of other behaviors&em;that is, these behaviors are described <e>compositionally</e>. The next few sections examine how to describe the events themselves. As you may have guessed, in ActiveVRML, even events can be described compositionally.
<h3><ti>External Events
<p>Some events originate outside of ActiveVRML; for example, they can originate with a user, such as the left or right mouse button press events in some of our previous reactive examples.
<p>Another example of an external event is a key press. Like a button event, a key press event can occur repetitively, but unlike a button event, key presses have associated data that indicates which character was pressed.
<h3><ti>Predicate-based Events
<p>Another kind of event is one in which a predicate (condition) about model parameters becomes true. For example, in the definition of scoreFromZero given above, the counting behavior goes on forever. We may, however, want to stop counting upon reaching some given maximum; that is, we may want to stop counting when the predicate current = maxScore becomes true for a given maxScore. Predicate-based events are written as
<ex>predicate(condition_expression)</ex>
<p>as in the following replacement for scoreFromZero.
<ex>scoreUpTo(maxScore) =
    let
        score(current) =
            current until
                scored => score(current+1)
              | predicate(current = maxScore) => current
    in
        score(0)
</ex>
<note><ti>Note
<p>In the context of a predicate, the equal sign (=) means equality, not definition.
</note>
<p>Alternatively, we could define scoreUpTo in terms of scoreFromZero.
<ex>scoreUpTo(maxScore) =
    scoreFromZero until
        predicate(scoreFromZero = maxScore) => maxScore
</ex>
<p>These event conditions may be arbitrarily complex. As a slightly more sophisticated example, suppose we want a ball to respond to the event of hitting the floor. We'll define center as the (time-varying) height of the ball's center point, and radius as the ball's radius. We will consider the ball to be hitting the floor when two conditions are true: the bottom of the ball (that is, the center height minus the radius) is not above the floor, and the ball is moving in a downward direction (that is, the derivative of the center height is less than zero).
<p>In ActiveVRML, this event is expressed as follows:
<ex>hitFloor =
    predicate((center - radius <= floor) and (derivative(center) < 0))
</ex>
<p>The derivative operation used in this event is discussed later in this document.
<note><ti>Note
<p>The parentheses in this example are not required and are included for clarity only, since the syntactic precedence of and is weaker than that of inequality operators.
</note>
<h3><ti>Alternative Events
<p>Given any two events, we can describe the event that occurs when either happens. For example, the following describes either a left mouse button being pressed or our ball hitting the floor:
<ex>LBP | hitFloor
</ex>
<p>By repeatedly using the choice operator <k>|</k>, we can include as many component events as desired in the choice. For example:
<ex>LBP | hitFloor | predicate(scoreFromZero = maxScore)
</ex>
<h3><ti>Events with Handlers
<p>Another way to build events is to introduce or enhance event data. For example, we may want an event that occurs whenever our user presses the left or right mouse button, and has value 1 if the left button is pressed and value 2 if the right button is pressed. First, we describe an event that occurs if the left button is pressed and has value 1:
<ex>LBP => 1
</ex>
<p>Then we describe a similar event based on the right button and having value 2:
<ex>RBP => 2
</ex>
<p>We then combine these two number-valued events into a single event:
<ex>buttonScore = LBP => 1 | RBP => 2
</ex>
<p>If an event already produces data, we can supply a way to transform the data into some other, more usable value. For example, we may want an event similar to buttonScore, but with values multiplied by 10. Rather than changing the definition of buttonScore, which may be needed elsewhere or may be out of our control, we make a new event by adding a multiply-by-ten event handler:
<ex>multiplyByTen(x) = 10 * x
buttonScore10 =
    buttonScore => multiplyByTen
</ex>
<p>We can do the same thing without introducing the multiplyByTen definition:
<ex>buttonScore10 =
    buttonScore => function (x). 10 * x
</ex>
<p>As another, simpler example of transforming event data, we may want to take a key press event and change all lowercase letters to uppercase.
<ex>keyPress => capitalize
</ex>
<note><ti>Note
<p>It is no coincidence that the notation for alternative events (e|e') and events with handlers (e=>f) is the same as introduced for reactive behaviors in the previous sections "Simple Reactivity" and "Event Data." The infix until operation used to express reactive behaviors applies to a behavior b and an event e, and yields a behavior that mimics b until the event e occurs, yielding a new behavior b', at which time the until behavior starts mimicking b'.
</note>
<h1 id=user_interaction><ti>User Interaction
<p>ActiveVRML animations are intrinsically interactive, meaning that they know how to respond to user interaction events. We have already seen examples of events based on mouse buttons. Another form of input is a key press, which is similar to a button press but includes the generated character as event data.
<p>Geometric user interaction is supported through events that report an animation being probed. From the animation's viewpoint, the user's probe is a point-valued behavior that ActiveVRML breaks into a static point at the onset of probing and an offset vector behavior that tracks relative movement. These points and vectors are 2-D for probed images and 3-D for probed geometry.
<p>Because there may be any number of transformed versions of an ActiveVRML animation coexisting at any time, there is no unique relationship between an animation and any given coordinate system, such as user coordinates. Thus, animations can only make sense of user input given to them within their own local coordinates. ActiveVRML automatically converts from the user's coordinates to the animation's own local coordinates.
<p>For example, the following describes an image moving under user interaction:
<ex>
movingImage(startImage) =
    let
        pickableImage, pickEvent = pickable(startImage, [])
    in
        pickableImage until
            andEvent(leftButtonPress, pickEvent) =>
                function ((), (pickPoint, offset)).
                    // Then make a version that moves with the offset
                    // (given in modeling coords)
                    let
                        moving = transformImage(translate(offset), pickableImage)
                    in
                        // Then stay with the moving image until released.
                        moving until
                            // Then snapshot the moving image and use it to restart.
                            snapshot(moving, leftButtonRelease) => movingImage;
</ex>
<h1 id=time_transforms><ti>Time Transforms
<p>Just as 2-D and 3-D transforms support spatial modularity in geometry and image behaviors, <e>time transforms</e> support temporal modularity for behaviors of all types.
<p>For example, suppose we have a rocking sailboat expressed as follows:
<ex>sailBoat1 = transformGeometry(rotate(zVector3, sin(time) * pi/6),
                              first(import("sailboat.wrl")))
</ex>
<p>If we want a slower sailboat, we could replace sin(time) with sin(time/4). However, for reusability, we instead describe a new sailboat in terms of sailBoat1.
<ex>sailBoat2 = timeTransform(sailBoat1, time/4)
</ex>
<p>With this technique, we could define any number of coexisting similar sailboats, each having its own rate of rocking.
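<p>For instance, a whole family of boats, each rocking at its own rate, is one parameterization away (a sketch built only from operations already shown; rockingBoat(0.25) is the same boat as sailBoat2):
<ex>rockingBoat(rate) = timeTransform(sailBoat1, time * rate)
fleet =
    rockingBoat(1) union
    transformGeometry(translate(4, 0, 0), rockingBoat(0.25))
</ex>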
<h1 id=diff_integration><ti>Differentiation and Integration
<p>Because ActiveVRML time is continuous, rather than proceeding in a series of small jumps, it makes sense to talk about the rate of change of behaviors of types such as number, point, vector, and orientation. For example, suppose that moonCenter is the time-varying position of the center of the moon. The moon's 3-D velocity vector (which is also time-varying) is expressed as follows:
<ex>derivative(moonCenter)
</ex>
<p>and the moon's 3-D acceleration vector is expressed as:
<ex>derivative(derivative(moonCenter))
</ex>
<p>Conversely, it is common to know the rate of motion of an object and want to determine the position over time. Given a velocity and an initial position, we could express the position over time as:
<ex>initialPos + integral(velocity)
</ex>
<p>It is often useful to specify the rate of motion of an object in terms of its own position. Suppose we have a goal, which may be moving, and we want to describe a point that starts at some initial position and always moves toward the goal, slowing down as it gets closer to the goal. The following definition describes this behavior:
<ex>pos = initialPos + integral(goal - pos)
</ex>
<p>This definition is equivalent to saying that the value of pos at the behavior's start time is initialPos, and that its velocity is goal - pos, which is in the direction of goal, relative to pos, with a speed equal to the distance between goal and pos. If, for example, pos and goal coincide, then pos will not be moving at all.
<p>Many realistic-looking physical effects can be described in this fashion, especially when the definitions are extended to use force, mass, and acceleration.
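<p>For instance, a point pulled toward goal by a spring-like force, with damping, might be sketched as follows, where springK, damping, and mass are assumed constants. Note that force, vel, and pos are defined in terms of one another, forming a system of the kind described in the note below:
<ex>force = springK * (goal - pos) - damping * vel;
vel = integral((1/mass) * force);
pos = initialPos + integral(vel);
</ex>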
<note><ti>Note
<p>Integrals in this self-referential form are ordinary differential equations. Any number of such definitions may be expressed in a mutually recursive fashion to express systems of ordinary differential equations.
<p>Implementations should take care to decouple the step sizes used in numerical integrators from that used for frame generation. There are a variety of numerically robust and efficient techniques, some of which adapt their step sizes to the local properties of the behavior being integrated.
</note>
<h1 id=lang_int><ti>Language Integration for ActiveVRML
<p>The Active Virtual Reality Modeling Language (ActiveVRML) is designed specifically for multimedia interactive animation. Existing popular programming languages, such as Microsoft Visual Basic, Java, and C++, are designed for coding general (non-multimedia) computations, such as file system management, network protocols, and intensive numerical computation. An important aspect of ActiveVRML is its ability to communicate with programs written in these existing general programming languages. This strategy results in a symbiosis, allowing each language to be used for its strengths. This section provides an overview of how ActiveVRML components interoperate with components written in other languages to form a single, multilingual application.
<h2><ti>Application Connections
<p>Although conventional programming languages do not support behaviors as first-class values, most languages do support some form of event modeling. For this reason, ActiveVRML's language integration mechanism is based on events with data.
<p>There are two communication paths between ActiveVRML and imperative programming languages:
<ol>
<li>ActiveVRML-originated events. ActiveVRML can notify an imperative program that a particular event has occurred and provide data (parameters) along with this notification. In this case, the imperative program imports an event defined in ActiveVRML and defines its own response code to go with it. The response code is executed when the ActiveVRML event occurs, using the data provided by the event.
<li>Program-originated events. An imperative program can notify ActiveVRML that an event has occurred and provide data along with this notification. In this case, the imperative program creates an object with a fire method that includes parameterized event data. This object can then be passed to ActiveVRML to be incorporated into a model. The imperative program's object acts as a kind of remote control that triggers event occurrences that affect the model.
</ol>
<p>Other forms of communication can be simulated using these two primitive mechanisms. For instance, an ActiveVRML behavior can be converted into a stream of time-stamped information sent to an imperative program. To do this, the programmer needs to implement a time-constrained event that determines how often to sample and what behavior and event information to sample. The resulting ActiveVRML event, passed to the program as described above, generates time-stamped sample values. The event could be expressed in ActiveVRML natively or it could originate in the imperative program, under explicit program control.
<p>The next section develops a hypothetical application that will be used to illustrate these techniques.
<h2><ti>Example Connections: The Personal Weather Channel
<p>To further explain how language integration works between ActiveVRML and other programming languages, consider the following example of a weather server. When connected to the weather server, a high-quality image of the user's geographic area is displayed at roughly state or province resolution, annotated with place names. The user clicks a town name, causing an animated diagram to appear, superimposed on the map. The animation is one of the following:
<ul>
<li>Sunny: An animated smiling sun.
<li>Partly cloudy: Wispy, light clouds dance over the town.
<li>Overcast: Thick clouds billow and roil, mostly obscuring the town.
<li>Rain: Cartoon rain droplets fall.
<li>Snow: Richly detailed ice crystals appear and blanket the town.
</ul>
<p>As the forecast for the town changes, the animated weather display is updated automatically.
<p>The obvious language division for this application is to perform the graphics, animation, and user input functions from ActiveVRML and obtain current forecast information over the Internet from an imperative program (a weather information service). There are many possibilities in designing the communication protocol. A very simple protocol, consisting of just two event messages, could be as follows:
<ol>
<li>A select-town event is linked from ActiveVRML to the program. Each time the user selects a town, the name of the town is passed as event data. The program responds by locating a forecast for that town and relaying the information with the update-forecast event. The select-town event also instructs the program to continue to track the forecast for this town, requesting an update-forecast message each time it changes.
<li>An update-forecast event is linked from the program to ActiveVRML. This event is sent in response to a select-town event and each time the program determines that the forecast has changed. The update-forecast event communicates one of the five forecasts: sunny, partly-cloudy, overcast, rainy, or snowy. In response to this event, ActiveVRML updates the weather animation to reflect the forecast.
</ol>
<p>The first part of this protocol illustrates the first form of application connection, ActiveVRML-originated events. The second part illustrates the second kind of application connection, program-originated events.
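<p>On the ActiveVRML side, reacting to forecast updates fits the parametric reactivity pattern shown earlier. The following is a hedged sketch, assuming updateForecast names the program-originated event (whose data is one of the five forecasts) and animationFor is a definition mapping each forecast to its animation:
<ex>weatherDisplay(forecast) =
    animationFor(forecast) until
        updateForecast => weatherDisplay
</ex>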
<h1 id=distrib><ti>Distribution
<p>The first version of ActiveVRML supports only single-user scenarios, but its fundamental design lends itself to multi-person, distributed, shared worlds, which we plan to support in the second version. Distribution has been a design goal from the beginning. The modeling (versus programming) approach that we take is much more effective for sharing content across a network. Because the model is an abstract representation of the content, devoid of specific sampling and frame generation rates, it is amenable to effective solutions for multi-user sharing, which we plan to pursue in the future.
<h1 id=optim><ti>Optimizations and Regulation
<p>The reactive behavior model targets ease of expression and provides a powerful mental model for thinking about media flow and interaction. However, there is a considerable difference between the reactive behavior model and the way it is implemented; that is, the actual stream of commands submitted to the hardware for execution and media generation. While the model targets ease of expression, the implementation targets efficiency on the underlying hardware, while remaining faithful to the semantics of the model.
<p>Reactive behaviors enforce a carefully designed discipline that lends itself to aggressive analysis and optimization techniques, and automatic regulation. Following is a discussion of some of these techniques.
<h2><ti>Program Transformation
<p>The reactive behavior model makes no use of implicit state, and its operations are side-effect free. Unlike traditional approaches, an animation is achieved not by side effects (modifying state) but, conceptually, by computing new values from previous ones. This value-based semantics provides referential transparency, which permits reducing models to much simpler and more efficient forms through program-transformation techniques. One example is constant folding, where a sub-expression built from static components is reduced to a single static value to avoid its reevaluation every frame.
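<p>For instance, in the growing solar system shown earlier, the static sub-expression 2 * pi can be folded to a constant once, instead of being reevaluated each frame (shown conceptually):
<ex>// before constant folding
transformGeometry(scale3(2 * pi * time), solarSystem1)
// after constant folding (conceptually)
transformGeometry(scale3(6.28318 * time), solarSystem1)
</ex>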
<h2><ti>Temporal Analysis
<p>Furthermore, the continuous time model, and the fact that a reactive behavior is a complete description of how an entity behaves over time and in reaction to events, lend themselves to temporal analysis of the behavior for optimization. One aspect of temporal analysis is exploiting temporal coherence, where it becomes possible to construct a frame using incremental operations based on the previous frame, rather than generating it from scratch.
<p>Another aspect is the prediction of events before they happen. This lends itself to priming the system in order to achieve low-latency reactivity to the event. Possible techniques for prediction include analysis of rates of change and derivative bounds, and interval arithmetic. For example, when a bullet is approaching a wall, the system can predict the explosion, which allows pre-loading a corresponding texture into main memory.
<p>More often, it is possible to determine that an event is not going to happen for a certain period of time, and hence to stop checking for it until that time. For example, if a meteor is approaching the viewing window, and the system determines from derivative bounds that it cannot enter for at least five seconds, then testing for the event need not resume until five seconds have passed.
<h2><ti>Regulation
<p>Automatic regulation is a necessity for ensuring the graceful presentation of common content on varying platforms. This is particularly important for Web content, in light of the wide variation among PCs. The kind of experience we are after treats the user as a real-time participant in the system. Therefore, the presentation of an interactive animation is a task whose correctness (for example, synchronization and smoothness) places time-critical demands on the delivery system.
<p>Regulation deals with fluctuations in load on a particular computer, and with differences in capacity between computers, while still delivering a smooth presentation of the content, albeit at varying levels of quality. Graceful degradation techniques include reducing the frame rate (straightforward, given our continuous time model), reducing the spatial resolution, using geometric levels of detail, and using simpler audio filters.
<h2><ti>Traditional Approaches
<p>In traditional approaches, where the interactive animation is imperative code making calls to low-level media APIs, all the machinery for time management, temporal optimization, low-latency reactivity to events, and regulation must be developed and hard-wired into the content itself. The content therefore becomes bulky and hard to construct. While this is feasible for content like games, where there is sufficient time and resources, it is not for illustrations in Web pages, which need to be casually created and compact. The ActiveVRML approach is to factor all of these nontrivial mechanisms into a generic engine that resides on the client side, while the content is reduced to the essence of the interactive animation.
<p>All of the arguments above also apply, very strongly, to shared spaces with multi-user participation. The ActiveVRML modeling approach for these shared spaces, with its temporal and event aspects and its disciplined use of state, makes it straightforward to extend the shared experience to distributed clients, each with its own viewer and appropriate regulation.
<h1 id=conclusion><ti>Conclusion
<p>In this document, we have briefly introduced ActiveVRML, a language for modeling interactive, multimedia animations, and have illustrated some of ActiveVRML's expressiveness through a series of simple examples. We refer the interested reader to the <e>ActiveVRML Reference Manual</e> for more details.
<h1 id=appendix><ti>Appendix A. An Extended Example
<p>In this appendix, we present a larger ActiveVRML example, namely a collection of balls bouncing around in a box.
<h2><ti>Geometry Importation
<p>The first step in our example is to import the basic geometric components&em;a ball and a box. Each geometry importation yields both a (static) geometry and two 3-D points, representing a minimum bounding box for the imported geometry.
<ex>ball, ballMin, ballMax = import("ball.wrl");
rawBox, boxMin, boxMax = import("box.wrl");
</ex>
<p>We will use the ball geometry as is, but we need to make the box mostly transparent, so the bouncing balls inside will be visible.
<ex>box = opacity3(0.2, rawBox);
</ex>
<h2><ti>One-Dimensional Bouncing
<p>It will be useful to define a one-dimensional (number-valued) bouncing behavior, parameterized by lower and upper bounds, acceleration, and initial position and velocity. This bouncing behavior will be made up of an infinite sequence of phases, punctuated by bounce events. Each phase is parameterized by an initial position and velocity, which start out as the overall initial position and velocity. The first bounce during a phase ends the phase, at which time the position and velocity are captured to provide the parameters of the next phase.
<ex>bounce1(min, max, accel, pos0, vel0) =
    let
        // Describe one phase of behavior and transition to next, given
        // starting position and velocity.
        bouncePhase(newPos0, newVel0) =
            let
                // Start velocity at newVel0, and accelerate.
                vel = newVel0 + integral(accel);
                // Start position at newPos0, and grow with velocity.
                pos = newPos0 + integral(vel);
                // Bounce event. Hits min descending or max ascending.
                bounce = predicate( (pos <= min and vel < 0)
                                 or (pos >= max and vel > 0) )
            in
                // Follow this position phase until a bounce. Then snapshot the
                // position and the reversed, reduced velocity to get the next
                // starting position and velocity, and repeat.
                pos until
                    snapshot((pos, -.9 * vel), bounce) => bouncePhase
    in
        bouncePhase(pos0, vel0);
</ex>
<h2><ti>Three-Dimensional Bouncing
<p>Next we will construct a 3-D bouncing behavior by appealing to the one-dimensional bouncing behavior for each of the three dimensions.
<p>The minimum and maximum ball translations are determined from the box's and ball's minimum and maximum points, which were generated during importation. The ball's minimum allowed translation is the one that when added to the ball's minimum point puts it into contact with the box's minimum point, and similarly for the maxima. These two observations lead to the following definitions for the minimum and maximum translation vectors:
<ex>ballTranslateMin = boxMin - ballMin;
ballTranslateMax = boxMax - ballMax;
</ex>
<p>Now we can define a bouncing ball geometry behavior, which is parameterized by the initial position and velocity.6
<ex>bouncyBall(pos0: point3, vel0: vector3) =
    let
        // Appeal to the 1D version three times, ...
        x = bounce1(xComponent(ballTranslateMin), xComponent(ballTranslateMax),
                    0, xComponent(pos0), xComponent(vel0))
        y = bounce1(yComponent(ballTranslateMin), yComponent(ballTranslateMax),
                    0, yComponent(pos0), yComponent(vel0))
        z = bounce1(zComponent(ballTranslateMin), zComponent(ballTranslateMax),
                    -9.8, zComponent(pos0), zComponent(vel0))
    in
        // Use the results to translate the ball.
        transformGeometry(translate(x, y, z), ball)
</ex>
<p>It is a simple matter then to add a box, to get a single-ball version of our example:
<ex>bouncyModel1(pos0, vel0) =
    box union bouncyBall(pos0, vel0)
</ex>
<h2><ti>Many Bouncing Balls
<p>Instead of just a single bouncing ball, we want an animation in which a user can cause any number of balls to be generated, all bouncing independently. To make this happen, we will define a second model, parameterized not by a single (pos0,vel0) pair, but rather by an event that produces (pos0,vel0) pairs, and adds a ball on each occurrence of the given event. This second model is the union of the box with a geometry composed of first no ball (the empty geometry), then one at the first occurrence of the given ball generator, then two at the second occurrence, and so forth.
<ex>bouncyModel2(ballGen) =
    let
        balls = emptyGeometry until
                    ballGen => function (pos0, vel0).
                        bouncyBall(pos0, vel0) union balls
    in
        box union balls
</ex>
<p>Here is a brief explanation of how this definition works: At first, balls is the empty geometry. When ballGen occurs, its (pos0,vel0) pair is used to generate a single new bouncing ball, together with another instance of balls, which, as before, is empty until the first occurrence of ballGen (after this new ball's start), at which time this second instance of balls becomes a new bouncing ball together with a third instance of balls, and so on.
<p>As a stylistic variation, we might factor our event processing into multiple phases: generation of (pos0,vel0) by ballGen; conversion of (pos0,vel0) into a bouncing ball by bouncyBall; and addition of the rest of the balls by a new function, addRest.
<ex>bouncyModel2(ballGen) =
    let
        addRest(geom) = geom union balls
        balls =
            emptyGeometry until
                ballGen => bouncyBall => addRest
    in
        box union balls
</ex>
<p>Note the cascading effect of event data handlers. The <e>=></e> operation associates to the left, so the handler line above is equivalent to
<ex> (ballGen => bouncyBall) => addRest
</ex>
<p>How might we define a ball-generating event, as needed by bouncyModel2? There are many possibilities, but one very simple one is to wait for a button press event and then use the time of the button press to generate a pseudo-random position and velocity.
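<p>The following is a hedged sketch of such a generator, using the snapshot operation introduced earlier and a hypothetical helper randomPosVel that maps the press time to a (position, velocity) pair:
<ex>// randomPosVel is an assumed helper, not defined in this document.
ballGen = snapshot(time, leftButtonPress) => randomPosVel
</ex>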
<h2><ti>Vanishing Balls
<p>With bouncyModel2, each new ball stays around forever once it comes into being. In this next variation, we will make each ball vanish (become the empty geometry) when it is picked. All we need to do is add another intermediate phase of event handling, untilPicked, that converts the newly created, permanent ball into a temporary one just before adding to the rest of the balls.
<ex>bouncyModel3(ballGen) =
    let
        untilPicked(geom) =
            let
                probableGeo, probeEv = pickable(geom, []);
            in
                probableGeo until
                    andEvent(leftButtonPress, probeEv) => emptyGeometry;
        addRest(geom) = geom union balls
        balls =
            emptyGeometry until
                ballGen => bouncyBall => untilPicked => addRest
    in
        box union balls
</ex>
<h1 id=footnotes><ti>Footnotes
<ol>
<li>In programming language terms, ActiveVRML is a declarative, rather than imperative, language.
<li>Geometry and image importation produces additional information beyond the geometry and image values themselves. We are omitting these values for brevity.
<li>(In the time-honored science fiction-movie tradition of sounds in space.)
<li>An intermediate, equivalent alternative is as follows:
<ex>score(current) =
    let
        addPoints =
            function (points). score(current+points)
    in
        current until
            scored => addPoints
</ex>
<li>The event andEvent(e,e') occurs when e and e' occur simultaneously. Its event data results from pairing the data produced from these two occurrences. Event handlers will then often destructure the resulting pair into its components and subcomponents, as in this example, in which the button press occurrence always generates the trivial data&em;which is written ()&em;and the probe occurrence generates a point and vector behavior.
<li>We specify explicit parameter types in this definition to disambiguate the use of functions like xComponent, which are overloaded for 2D and 3D points and vectors.
</ol>