The options described here give you maximum control over audio data, files, and parameters. These options are less commonly used than those presented in the discussion about Using RSX 3D. These options enable you to:

- stream audio data through a streaming listener or streaming emitter
- alter an emitter's pitch
- synchronize a group of cached emitters
- receive notification when an emitter finishes playing
- mute emitters based on their audibility
- disconnect the direct listener from the audio driver
- specify a relative processing budget
With audio streaming, you gain access to the audio data, but you assume responsibility for managing buffers and pacing playback. RSX 3D manages this for you when you use a direct listener or cached emitters.
The application receives processed buffers from RSX 3D through the streaming listener interface. With a streaming listener, you are responsible for providing buffer space and pacing the real-time playback of audio data.
You may need to use streaming to gain control of the audio data for processing. To have sound in your application, you must create either a streaming listener or a direct listener; RSX 3D allows only one listener. For more information about the direct listener, see Section 4.3.
To create a streaming listener, create an RSXSTREAMINGLISTENER object and request a pointer to the IID_IRSXStreamingListener interface.
The IRSXStreamingListener interface provides methods that let you create a streaming listener and request buffer space. These methods are in addition to the IRSXListener abstract base class methods which let you set the position and orientation of a direct or streaming listener object. See Section 4.3.1 for more information about base class listener methods.
Table 1. Streaming Listener Methods
This method | Does the following |
RequestBuffer | Requests the synchronous generation of a buffer of audio data. |
When you set the camera position and orientation for the graphics rendering, also update the audio listener position and orientation.
RSX 3D provides streaming support for the listener and emitters. To create a streaming listener, create an RSXSTREAMINGLISTENER object and request a pointer to the IID_IRSXStreamingListener interface. In the listener description, you need to specify a PCM buffer format and the approximate buffer size you want the streaming listener to return to your application.
During a live connection, the application is responsible for pacing the listener. This flexible implementation enables the application to easily connect the listener output with a paced output device. For more information about audio streaming and paced output devices, see Section 2.3. The following code shows how to create a streaming listener:
/*
// Create a streaming listener and
// save the IRSXStreamingListener interface
*/
RSXSTREAMINGLISTENERDESC slDesc;
IRSXStreamingListener* lpSL;

memset(&slDesc, 0, sizeof(RSXSTREAMINGLISTENERDESC));
slDesc.cbSize = sizeof(RSXSTREAMINGLISTENERDESC);
slDesc.lpwf = lpMyOutputWaveFormat;
slDesc.dwRequestedBufferSize = 0;
slDesc.dwUser = 0;

hr = CoCreateInstance(CLSID_RSXSTREAMINGLISTENER, NULL,
                      CLSCTX_INPROC_SERVER,
                      IID_IRSXStreamingListener, (void **) &lpSL);
if (SUCCEEDED(hr)) {
    lpSL->Initialize(&slDesc, lpUnk);
}

/*
// Get the buffer size used by the streaming listener for
// the PCM wave format
*/
dwActualBufferSize = slDesc.dwActualBufferSize;
When your application requires a buffer, call the IRSXStreamingListener's RequestBuffer method. The following code demonstrates retrieving a buffer from the streaming listener:
/*
// Allocate a buffer and request processed audio data
// from the streaming listener
*/
LPSTR lpData;

lpData = (LPSTR)HeapAlloc(myHeap, HEAP_ZERO_MEMORY, dwActualBufferSize);
lpSL->RequestBuffer(lpData, NULL, 0);
.
.
.
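Because your application paces the listener, it helps to know how much audio time each returned buffer represents. The helper below is an illustrative sketch, not part of the RSX API; the parameter names mirror the Win32 WAVEFORMATEX fields, and the formula is the standard PCM byte-rate calculation:

```cpp
#include <cstdint>

// Duration, in milliseconds, of a PCM buffer of dwBufferSize bytes.
// The byte rate is nSamplesPerSec * nChannels * (wBitsPerSample / 8),
// as in the Win32 WAVEFORMATEX structure.
static uint32_t BufferDurationMs(uint32_t dwBufferSize,
                                 uint32_t nSamplesPerSec,
                                 uint16_t nChannels,
                                 uint16_t wBitsPerSample)
{
    uint64_t nAvgBytesPerSec =
        (uint64_t)nSamplesPerSec * nChannels * (wBitsPerSample / 8);
    return (uint32_t)(((uint64_t)dwBufferSize * 1000u) / nAvgBytesPerSec);
}
```

For example, a 4410-byte buffer of 22.05 kHz, 16-bit mono audio represents 100 ms of sound, so requesting roughly ten such buffers per second keeps a paced output device fed without falling behind.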
RSX 3D provides streaming support for the listener and emitters. You may need to use streaming to gain control of the audio data for processing before passing it to RSX 3D or for streaming data from a network. For instance, you may want to perform voice decompression and/or wave table synthesis on a per-buffer basis before submitting to a streaming emitter. See Section 2.3 for a discussion of audio streaming.
To create a streaming emitter, create an RSXSTREAMINGEMITTER object and request a pointer to the IID_IRSXStreamingEmitter interface.
The IRSXStreamingEmitter interface provides methods that let you control buffering for real-time processing of emitter data. These methods are in addition to the IRSXEmitter abstract base class methods which let you set the position, orientation, pitch, state, model, and processing budget for a cached emitter or a streaming emitter object. See Section 4.4.1 for more information about abstract base class emitter methods.
Table 2. Streaming Emitter Methods
This method | Does the following |
Flush | Removes all submitted buffers from the emitter's playback queue. |
SubmitBuffer | Submits a buffer of audio data to the emitter's playback queue. |
When you set the position and orientation for graphical objects, also update each audio emitter's position and orientation.
The following code demonstrates creating a streaming emitter.
/*
// Create a streaming emitter and
// save the IRSXStreamingEmitter interface
*/
RSXSTREAMINGEMITTERDESC seDesc;
IRSXStreamingEmitter* lpSE;

memset(&seDesc, 0, sizeof(RSXSTREAMINGEMITTERDESC));
seDesc.cbSize = sizeof(RSXSTREAMINGEMITTERDESC);
seDesc.lpwf = lpMyInputWaveFormat;
seDesc.dwUser = (DWORD)pGraphicalObject;

hr = CoCreateInstance(CLSID_RSXSTREAMINGEMITTER, NULL,
                      CLSCTX_INPROC_SERVER,
                      IID_IRSXStreamingEmitter, (void **) &lpSE);
if (SUCCEEDED(hr)) {
    lpSE->Initialize(&seDesc, lpUnk);
}
.
.
.
When you want to submit a buffer to a streaming emitter, call the IRSXStreamingEmitter's SubmitBuffer method. If the buffer is not Pulse Code Modulation (PCM), RSX 3D converts it using the appropriate ACM driver. RSX 3D does not limit buffer formats to PCM; you can use any ACM resolvable format. For more information about audio streaming and paced output devices, see Section 2.3.
To control buffering, an event handle is specified in the RSXBUFFERHDR structure. This allows RSX 3D to signal the application when it finishes using a buffer of emitter data. The application can then refill the buffer and send more data to RSX 3D. This is a standard Win32 event, so any mechanism, such as WaitForSingleObject, WaitForMultipleObjects, etc., may be used to determine the state of the event. Using this approach, you can write a polling or blocking mechanism to supply buffers to the emitter. The following code demonstrates submitting a buffer to the streaming emitter:
LPRSXBUFFERHDR lpbh;
LPSTR lpData;
HANDLE hEventSignal;

lpbh = (LPRSXBUFFERHDR)HeapAlloc(myHeap, HEAP_ZERO_MEMORY, sizeof(RSXBUFFERHDR));
lpData = (LPSTR)HeapAlloc(myHeap, HEAP_ZERO_MEMORY, dwBufferSize);
hEventSignal = CreateEvent(NULL, FALSE, FALSE, NULL);

lpbh->cbSize = sizeof(RSXBUFFERHDR);
lpbh->dwSize = dwBufferSize;
lpbh->lpData = lpData;
lpbh->hEventSignal = hEventSignal;

lpSE->SubmitBuffer(lpbh);
.
.
.
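The refill cycle driven by hEventSignal amounts to a blocking loop: wait for the event, refill the buffer with the next chunk of source audio, and resubmit. In real Win32 code the wait would be WaitForSingleObject(hEventSignal, INFINITE). The sketch below is illustrative only: ToyEmitter and its bufferDone flag are stand-ins for the RSX emitter and event so the control flow is self-contained.

```cpp
#include <algorithm>
#include <cstring>
#include <vector>

// Toy stand-in for a streaming emitter: "plays" a buffer synchronously
// and then signals completion, mimicking hEventSignal semantics.
struct ToyEmitter {
    std::vector<char> played;   // everything the emitter has consumed
    bool bufferDone = false;    // stand-in for the Win32 auto-reset event

    void SubmitBuffer(const char* data, size_t size) {
        played.insert(played.end(), data, data + size);
        bufferDone = true;      // SetEvent(hEventSignal) in the real API
    }
};

// Blocking refill loop: wait until the emitter is done with the buffer,
// refill it with the next chunk of source audio, and resubmit.
// Returns the number of buffers submitted.
static size_t StreamAll(ToyEmitter& em, const std::vector<char>& src,
                        size_t bufSize)
{
    std::vector<char> buffer(bufSize);
    size_t pos = 0, submissions = 0;
    em.bufferDone = true;                 // the buffer starts out free
    while (pos < src.size()) {
        // WaitForSingleObject(hEventSignal, INFINITE) in the real API
        while (!em.bufferDone) { }
        em.bufferDone = false;            // auto-reset event semantics
        size_t n = std::min(bufSize, src.size() - pos);
        std::memcpy(buffer.data(), src.data() + pos, n);
        em.SubmitBuffer(buffer.data(), n);
        pos += n;
        ++submissions;
    }
    return submissions;
}
```

A polling variant would replace the inner wait with a WaitForSingleObject call using a zero timeout, letting the application do other work between checks.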
Pitch is the predominant frequency sounded by an acoustical source. It is the height or depth of a tone. You can alter the pitch of an emitter by calling the IRSXEmitter interface's SetPitch method. A pitch factor of 1.0 is the default, causing no change in pitch. RSX 3D supports pitch adjustments from 0.25 to 4.0. A pitch of 0.25 produces a deep tone, whereas a pitch of 4.0 produces a high, shrill tone.
By modifying pitch, you can create sound effects such as an engine accelerating up to speed. You can also use the same sound source for another engine emitter that runs at a fixed speed, and still another that runs at a low speed.
NOTE. When you change the pitch for an emitter, its timing also changes in proportion to the pitch.
The following code sample uses the SetPitch method to specify the pitch adjustment of an emitter. The parameter fPitch was preset to a value between 0.25 and 4.0.
m_lpCE->SetPitch(fPitch);
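Before calling SetPitch, it can be useful to keep a computed pitch factor inside the supported 0.25 to 4.0 range and to predict the resulting playback time, which changes in inverse proportion to pitch as the note above describes. These helpers are an illustrative sketch, not part of the RSX API:

```cpp
// Clamp a pitch factor to the range RSX 3D supports (0.25 to 4.0).
static float ClampPitch(float fPitch)
{
    if (fPitch < 0.25f) return 0.25f;
    if (fPitch > 4.0f)  return 4.0f;
    return fPitch;
}

// A sound's playback time scales inversely with pitch:
// doubling the pitch halves the duration.
static float PitchedDurationSec(float fOriginalSec, float fPitch)
{
    return fOriginalSec / ClampPitch(fPitch);
}
```

For example, a two-second engine sample played at a pitch factor of 2.0 completes in one second, which matters if other game events are timed to the sound.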
Synchronizing emitters means grouping them so they operate simultaneously. It is sometimes necessary to synchronize the control of several cached emitters. One advantage of doing this is to ensure that a group of emitters starts, pauses, or stops at exactly the same time. Another advantage is that you can write more efficient code by applying a method to a group of cached emitters, rather than to each emitter.
To create a synchronization group, you specify a group ID when you create each cached emitter. The RSXCACHEDEMITTERDESC data structure contains a dwGroupID member, which you can use to specify the synchronization group. You can use any non-zero value to represent a synchronization group. When you call the ControlMedia method of the IRSXCachedEmitter interface, RSX 3D applies the control to all members of the synchronization group.
NOTE. For the emitters in a group to stay synchronized, make their files the same length, turn Doppler off, and set pitch to the default (1.0).
The following code demonstrates the synchronization of three cached emitters using the group ID 5.
/*
// Create three emitters, specifying the same synchronization group ID
*/
RSXCACHEDEMITTERDESC ceDesc;
IRSXCachedEmitter* lpCE_A;
IRSXCachedEmitter* lpCE_B;
IRSXCachedEmitter* lpCE_C;

memset(&ceDesc, 0, sizeof(RSXCACHEDEMITTERDESC));
ceDesc.cbSize = sizeof(RSXCACHEDEMITTERDESC);
ceDesc.dwGroupID = 5;

strcpy(ceDesc.szFilename, "a.wav");
if (lpRSX->CreateCachedEmitter(&ceDesc, &lpCE_A, NULL) == S_OK && lpCE_A) {
    strcpy(ceDesc.szFilename, "b.wav");
    if (lpRSX->CreateCachedEmitter(&ceDesc, &lpCE_B, NULL) == S_OK && lpCE_B) {
        strcpy(ceDesc.szFilename, "c.wav");
        if (lpRSX->CreateCachedEmitter(&ceDesc, &lpCE_C, NULL) == S_OK && lpCE_C) {
            .
            .
            .
            /*
            // Start playing all three emitters with continuous looping;
            // the control applies to every member of group 5
            */
            lpCE_A->ControlMedia(RSX_PLAY, 0, 0);
            .
            .
            .
When you destroy a cached emitter, RSX 3D removes it from the synchronization group.
When you create a cached emitter, you can specify hEventSignal to indicate that you want your application to know when the emitter stops playing. RSX 3D signals the application when playback completes. Although you can determine when play stops by polling (using the IRSXEmitter interface's QueryMediaState method), the signal method is more efficient because it only notifies the application when the event occurs.
RSX 3D provides several methods to let you achieve fine-grained control over which emitters you want to play. You can request RSX 3D to determine how audible a particular emitter is at present, and based on this information, you can choose to mute the emitter.
RSX bases the audibility factor on the emitter's static intensity times the dynamic intensity. You specify the static intensity in the RSXEMITTERMODEL data structure. RSX 3D calculates the dynamic intensity from the emitter model, using the emitter's distance to the listener.
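As a mental model only, the audibility factor behaves like the product described above: static intensity times a distance-dependent dynamic intensity. The inverse-distance rolloff below is an assumption chosen for illustration; RSX 3D's actual emitter-model math is internal and not specified here.

```cpp
// Illustrative audibility = static intensity * dynamic intensity.
// The inverse-distance rolloff beyond fMinDistance is an assumption
// for this sketch, not RSX 3D's actual emitter-model calculation.
static float Audibility(float fStaticIntensity, float fDistance,
                        float fMinDistance)
{
    float fDynamic = (fDistance <= fMinDistance)
                         ? 1.0f
                         : fMinDistance / fDistance;
    return fStaticIntensity * fDynamic;
}
```

Under this model, an emitter with static intensity 1.0 heard at twice its minimum distance has the same audibility factor as an emitter with static intensity 0.5 heard up close.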
NOTE. The audibility factor does not analyze the actual audio data. Two emitters with the same audibility factor may not play at the same loudness; that depends on the level at which the sound was authored.
The process of muting/unmuting does not impact the play position of the emitter. Regardless of the emitter's mute state, time elapses for the emitter. Because RSX 3D calculates only geometry and play position, emitters in a muted state have extremely low overhead.
One possible priority scheme you may want to use for your application is to play the four most audible sounds. First, you need to determine how audible each emitter is. To do this, call the IRSXEmitter interface's QueryMediaState method and request the pfAudibleLevel parameter for each emitter in the system.
This example shows how you might construct a list containing the audible level and associated emitter, and then sort the list by audible level. You can use the IRSXEmitter abstract base class SetMuteState method to unmute the four most audible emitters and then to mute the remaining emitters.
The following code uses the RSX 3D library to demonstrate this algorithm.
/*
// Examine each emitter and update its audible level
*/
RSXQUERYMEDIAINFO qmi;

node = SceneEmitterList;
while (node) {
    node->lpAE->QueryMediaState(&qmi);
    node->fAudibleLevel = qmi.fAudibleLevel;
    node = node->pnext;
} /* while */

/*
// User function to sort the list by audible level
*/
SortByAudibleLevel(SceneEmitterList);

/*
// Mute all but the four most audible
*/
node = SceneEmitterList;
counter = 4;
while (node) {
    node->lpAE->SetMuteState(counter-- <= 0);
    node = node->pnext;
} /* while */
.
.
.
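The same mute-all-but-N policy can be written without the RSX types; only the sort-and-threshold logic is the point. ToySceneEmitter below is an illustrative stand-in for a scene-list node, not an RSX structure:

```cpp
#include <algorithm>
#include <vector>

// Stand-in for a scene-list node holding one emitter's state.
struct ToySceneEmitter {
    float fAudibleLevel;
    bool  bMuted;
};

// Unmute the `keep` most audible emitters; mute the rest.
static void MuteAllButTop(std::vector<ToySceneEmitter>& emitters, size_t keep)
{
    // Sort by audible level, loudest first.
    std::sort(emitters.begin(), emitters.end(),
              [](const ToySceneEmitter& a, const ToySceneEmitter& b) {
                  return a.fAudibleLevel > b.fAudibleLevel;
              });
    // Everything past the first `keep` entries gets muted.
    for (size_t i = 0; i < emitters.size(); ++i)
        emitters[i].bMuted = (i >= keep);
}
```

Since muted emitters have very low overhead, running this pass once per frame (or whenever the listener moves significantly) is typically cheap.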
When you create a direct listener, the audio driver it uses may no longer be available to other clients because most audio drivers do not support full-duplex or shared operation. A common workaround is to have your application switch the driver between input and output modes as needed. To support this switching model, the RSX 3D library provides a disconnect/connect mechanism.
NOTE. The need to disconnect is only applicable to the direct listener. The streaming listener does not use an audio resource and therefore does not need this support.
Call the IRSXDirectListener interface's Disconnect method to temporarily disconnect the listener from the audio driver, making it available for your application to open for direct recording. When you want your application to return to audio output through RSX 3D, call the Connect method.
The following code demonstrates the suspend-resume cycle:
// Audio device is used for output
lpDL->Disconnect();

// Audio device is available for input
lpDL->Connect();

// Audio device is again used for output
RSX 3D enables an application to specify a relative processing budget. RSX 3D uses this as a guide when it determines to what extent it must localize a specific sound. When RSX 3D is active, a tray applet appears in the Windows system tray. This applet allows users of RSX 3D-enhanced applications to specify the processing budget.
Copyright ©1996, 1997 Intel Corporation. All rights reserved