Of all the human perceptions, vision and hearing are perhaps the two most important, and we have evolved highly specialized sensors for them over millions of years. The creation of a realistic virtual world therefore calls for the development of realistic 3D virtual objects and scenery supplemented by associated sounds and audio signals. The development of 3D visual objects is of course the main domain of Java 3D. However, as when watching a movie, realistic sound and audio are also essential in some applications. In this chapter, we will discuss how sound and audio can be added and supported in Java 3D.

The Java 3D API provides functionality for adding and controlling sound in a spatialized 3D manner. It also allows the rendering of aural characteristics for modeling real-world, synthetic, or special acoustical effects (Warren, 2006).

From a programming point of view, the inclusion of sound is similar to the addition of light: both are accomplished by adding nodes to the scene graph of the virtual world. A sound node is created through the abstract Sound class, which has three subclasses: BackgroundSound, PointSound, and ConeSound (Osawa, Asai, Takase, & Saito, 2001). Multiple sound sources, each with a reference sound file and associated methods for control and activation, can be included in the scene graph. A sound becomes audible whenever the scheduling bounds associated with its node intersect the activation volume of the listener.

By creating an AuralAttributes object and attaching it to a Soundscape leaf node in the scene graph, we can also specify the use of certain acoustical effects in the rendering of sound. This is done by using the various methods that change important acoustic parameters in the AuralAttributes object.
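As a brief illustration of this mechanism, the sketch below shows how an AuralAttributes object might be created and attached to a Soundscape node. The specific parameter values and the bounding region are illustrative assumptions, not taken from the chapter's examples:

```java
import javax.media.j3d.AuralAttributes;
import javax.media.j3d.BoundingSphere;
import javax.media.j3d.BranchGroup;
import javax.media.j3d.Soundscape;
import javax.vecmath.Point3d;

public class SoundscapeSketch {
    // Builds a BranchGroup carrying a Soundscape with simple reverberation.
    public static BranchGroup createSoundscapeBranch() {
        AuralAttributes aa = new AuralAttributes();
        aa.setReverbDelay(100.0f);          // delay before reverberation, in milliseconds
        aa.setReflectionCoefficient(0.7f);  // fraction of sound reflected by surfaces
        aa.setReverbOrder(6);               // number of reverberation iterations

        // The Soundscape applies these attributes to sounds rendered while the
        // listener is within its application bounds.
        Soundscape soundscape = new Soundscape();
        soundscape.setApplicationBounds(
            new BoundingSphere(new Point3d(0.0, 0.0, 0.0), 100.0));
        soundscape.setAuralAttributes(aa);

        BranchGroup branch = new BranchGroup();
        branch.addChild(soundscape);
        return branch;
    }
}
```

The returned BranchGroup would be added to the scene graph in the same way as any other branch, alongside the sound nodes it is meant to influence.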
This is a subclass of the Sound class for audio that is unattenuated and nonspatialized. That is, similar to ambient lighting, the sound generated has no specific position or direction and is independent of where the user is in the virtual 3D world. However, unlike background scenery, more than one BackgroundSound node can be enabled and playing at the same time.
Figure 1 shows the code segment from an example that adds a background sound to our virtual 3D world. In line 5, the sound file is opened and loaded by a MediaContainer from the current directory. Alternatively, a path can be specified, or the sound data can come from the Internet through a URL such as http://vlab.ee.nus.edu.sg/vlab/sound.wav.
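The two loading alternatives can be sketched as follows. The file name sound.wav is assumed here for illustration; MediaContainer accepts either a URL string or a URL object:

```java
import java.net.URL;
import javax.media.j3d.MediaContainer;

public class MediaContainerSketch {
    // Load sound data from a file in the current directory.
    public static MediaContainer fromFile() {
        return new MediaContainer("file:sound.wav");
    }

    // Alternatively, load the same data over the network via a URL.
    public static MediaContainer fromUrl() throws Exception {
        URL url = new URL("http://vlab.ee.nus.edu.sg/vlab/sound.wav");
        return new MediaContainer(url);
    }
}
```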
Figure 1. Code segment for SoundBackgroundPanel.java
Lines 8 to 17 set certain read and write capabilities for the sound node created in line 1. With these capabilities set, it is possible to change the sound data and to alter the enable, loop, release, and continuous-play behavior of the node through the methods invoked in lines 18 to 25.
Specifically, line 18 uses the setSoundData method to change the sound source to the one loaded earlier. Line 19 sets the initial amplitude gain for playing the sound, and line 20 uses the setLoop method to specify the number of times the sound will be repeated. In the current case, the argument is 0 and the sound will be played once; an argument of –1 repeats the sound indefinitely.
Line 21 invokes setReleaseEnable with false. This flag is relevant only when the sound is played once. Setting it to true forces the sound to play until it finishes, even in the presence of a stop request.
Similarly, line 22 invokes setContinuousEnable with false. Setting this flag to true causes the sound to continue playing silently even when the node is no longer active, such as when it is outside the scheduling bounds. A false setting restarts the sound from the beginning when the audio object reenters the scheduling bounds.
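The steps described above can be gathered into a single sketch. This is not the actual code of Figure 1, which is not reproduced here; the gain value, bounding region, and class name are illustrative assumptions, and the comments indicate which lines of the figure each step corresponds to:

```java
import javax.media.j3d.BackgroundSound;
import javax.media.j3d.BoundingSphere;
import javax.media.j3d.BranchGroup;
import javax.media.j3d.MediaContainer;
import javax.media.j3d.Sound;
import javax.vecmath.Point3d;

public class BackgroundSoundSketch {
    public static BranchGroup createSoundBranch() {
        BackgroundSound sound = new BackgroundSound();

        // Open and load the sound file (line 5 of Figure 1).
        MediaContainer media = new MediaContainer("file:sound.wav");

        // Grant the capabilities needed to modify the node at run time
        // (lines 8 to 17 of Figure 1).
        sound.setCapability(Sound.ALLOW_SOUND_DATA_WRITE);
        sound.setCapability(Sound.ALLOW_INITIAL_GAIN_WRITE);
        sound.setCapability(Sound.ALLOW_LOOP_WRITE);
        sound.setCapability(Sound.ALLOW_RELEASE_WRITE);
        sound.setCapability(Sound.ALLOW_CONT_PLAY_WRITE);
        sound.setCapability(Sound.ALLOW_ENABLE_WRITE);

        // Configure playback (lines 18 to 25 of Figure 1).
        sound.setSoundData(media);        // attach the loaded audio
        sound.setInitialGain(1.0f);       // initial amplitude gain
        sound.setLoop(0);                 // play once; -1 would loop forever
        sound.setReleaseEnable(false);    // a stop request silences it immediately
        sound.setContinuousEnable(false); // restart from the beginning on reactivation
        sound.setSchedulingBounds(
            new BoundingSphere(new Point3d(), 100.0));
        sound.setEnable(true);            // start the sound

        BranchGroup branch = new BranchGroup();
        branch.addChild(sound);
        return branch;
    }
}
```

Note that the capabilities must be set before the branch is made live; once the branch is attached to a live scene graph, only the operations permitted by these capabilities remain available.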