The RSX 3D interfaces and abstract base classes contain sets of related methods that provide the tools for rendering 3D audio. These methods let you modify data structures, set parameters and flags, and retrieve audio information.
RSX 3D contains five interfaces and two abstract base classes. Applications cannot use the abstract base class methods directly. The emitters or listeners provide the abstract base class functionality through their respective interfaces.
Successful operation of a method is indicated with the standard S_OK return code. Generic COM error codes are all prefixed with E_, while RSX 3D defined error codes are prefixed with RSXERR_. For a detailed description of each error code, refer to Section 9.2.
This is the primary interface for managing a 3D audio environment. The methods for the IRSX2 interface are:
This is the older, still supported interface for managing a 3D audio environment. It is recommended that you use the IRSX2 interface instead to modify the audio environment. Instead of using the Create...() functions provided in the IRSX interface, use the COM function CoCreateInstance() to construct listeners and emitters. The methods for the IRSX interface are:
Registers a new file-based sound in the 3D audio environment.
HRESULT CreateCachedEmitter(lpCachedEmitterAttr, lplpCachedEmitterInterface, reserved)
Parameters
Returns
Description
Emitters define the objects in an audio environment. RSX 3D does not limit the number of emitters you can create; only the computer's processing power and memory resources limit the number of emitters an application can handle. An audio application can have two types of emitters: cached or streaming.
A cached emitter adds a new file-based sound to the 3D audio environment and returns an IRSXCachedEmitter interface. Each cached emitter is a single file-based sound source: a .WAV or .MID file, or a wave resource embedded in an .EXE or .DLL file. Use the IRSXEmitter abstract base class methods to control the media behavior, position, orientation, and audio properties of an emitter, and the IRSXCachedEmitter methods to handle file, memory, or application resources for an emitter.
NOTE: RSX can play only one .MID file at a time.
A streaming emitter lets your application submit real-time audio data. For more information about streaming emitters, refer to the IRSXStreamingEmitter interface.
Usage Sample
/*
// Create a Cached Emitter
*/
RSXCACHEDEMITTERDESC rsxCE;   // emitter description

ZeroMemory(&rsxCE, sizeof(RSXCACHEDEMITTERDESC));
rsxCE.cbSize = sizeof(RSXCACHEDEMITTERDESC);
rsxCE.dwFlags = 0;

// If you want to use a wave file embedded as a resource
// instead of a file, replace "cppmin.wav" with "cppmin.exe RESOURCE_ID",
// i.e. "cppmin.exe 15", or try this:
// sprintf(rsxCE.szFilename, "cppmin.exe %d", IDR_WAVE1);
strcpy(rsxCE.szFilename, "cppmin.wav");
rsxCE.dwUser = 50;

// Create the cached emitter
if( SUCCEEDED(m_lpRSX->CreateCachedEmitter(&rsxCE, &m_lpCE, NULL)) &&
    (m_lpCE) )
. . .
Registers a new listener in the 3D audio environment, and returns an interface pointer for modifying the listener's state.
HRESULT CreateDirectListener(lpDirectListenerAttr,
lplpDirectListenerInterface, reserved)
Parameters
Returns
Description
A listener defines the audio view of the scene. An audio application can have only one listener: direct or streaming.
A direct listener allows RSX 3D to output to a standard audio interface. If you have DirectSound II installed and you provide a handle to the application's main window, RSX 3D uses the DirectSound II (or later) interface. If the user of your application does not have DirectSound II installed, RSX 3D uses the Wave API. Use the IRSXListener abstract base class methods to set the position and orientation of a listener and the IRSXDirectListener methods to connect or disconnect a direct listener from an output device.
A streaming listener lets your application receive processed buffers from RSX 3D. For more information about the streaming listener, refer to the IRSXStreamingListener interface.
Usage Sample
/*
// Create a Direct Listener
*/
RSXDIRECTLISTENERDESC rsxDL;   // listener description

rsxDL.cbSize = sizeof(RSXDIRECTLISTENERDESC);
rsxDL.hMainWnd = AfxGetMainWnd()->GetSafeHwnd();
rsxDL.dwUser = 0;
rsxDL.lpwf = NULL;

// create the direct listener now
if( SUCCEEDED(m_lpRSX->CreateDirectListener(&rsxDL, &m_lpDL, NULL)) &&
    (m_lpDL) ) {
. . .
Registers a new streaming sound in the 3D audio environment, and returns an interface pointer for controlling the emitter.
HRESULT CreateStreamingEmitter(lpStreamingEmitterAttr, lplpStreamingEmitterInterface, reserved);
Parameters
Returns
Description
Emitters define the objects in an audio environment. RSX 3D does not limit the number of emitters you can create; only the computer's processing power and memory resources limit the number of emitters an application can handle. An audio application can have two types of emitters: cached or streaming.
A cached emitter adds a new file-based sound to the 3D audio environment. For more information about cached emitters, refer to the IRSXCachedEmitter interface.
A streaming emitter lets your application submit real-time audio data. Creating a streaming emitter requires specifying a buffer format and approximate size. RSX 3D uses PCM format internally; when buffers are submitted in non-PCM formats, RSX 3D performs a real-time conversion using the appropriate ACM driver.
If the ACM driver's performance is not adequate, the application should decompress to PCM itself before submitting each buffer. RSX 3D signals the application when it finishes using a buffer of emitter data; the application can then reuse that buffer to submit more emitter data. An application can have any number of buffers submitted to RSX 3D at one time.
Use the IRSXEmitter abstract base class methods to control the media behavior, position, orientation, and audio properties of an emitter and the IRSXStreamingEmitter methods to specify the buffer format and approximate size.
Usage Sample
/*
// CreateStreamingEmitter Sample
*/
RSXSTREAMINGEMITTERDESC seDesc;

memset(&seDesc, 0, sizeof(RSXSTREAMINGEMITTERDESC));
seDesc.cbSize = sizeof(RSXSTREAMINGEMITTERDESC);
seDesc.lpwf = new WAVEFORMATEX;
if(!seDesc.lpwf) {
    return 0;
}

// fill in the waveformat with values
// for 22kHz, 16-bit, mono, PCM format
seDesc.lpwf->wFormatTag = WAVE_FORMAT_PCM;
seDesc.lpwf->nChannels = 1;
seDesc.lpwf->nSamplesPerSec = 22050;
seDesc.lpwf->nAvgBytesPerSec = 44100;
seDesc.lpwf->nBlockAlign = 2;
seDesc.lpwf->wBitsPerSample = 16;
seDesc.lpwf->cbSize = 0;

// create a streaming emitter
hr = m_lpRSX->CreateStreamingEmitter(&seDesc, &m_lpSE, NULL);
if(SUCCEEDED(hr)) {
. . .
Registers a new streaming listener in the 3D audio environment, and returns an interface pointer for modifying the listener's state.
HRESULT CreateStreamingListener(lpStreamingListenerAttr, lplpStreamingListenerInterface, reserved)
Parameters
Returns
Description
A listener defines the audio view of the scene. An audio application can have only one listener: direct or streaming.
A direct listener allows RSX 3D to output to a standard audio interface. For more information about the direct listener, refer to the IRSXDirectListener interface.
A streaming listener lets your application receive processed buffers from RSX. Creating a streaming listener requires specifying the buffer format and approximate size. The buffer format for streaming listeners must be PCM. Use the IRSXListener abstract base class methods to set the position and orientation of a listener and the IRSXStreamingListener methods to request the synchronous generation of audio output.
The listener streaming interface is synchronous; buffers are returned on demand. This non-blocking interface simplifies the connection to real-time devices and gives you explicit control over processor usage. In this model, you can use the RSX 3D library as a transform filter in the context of any real-time streaming environment. The intention is that stream pacing is provided by the device to which buffers are written.
Usage Sample
/*
// Create Streaming Listener
//
// Fill WAVEFORMATEX structure to specify output format
// Fill RSXSTREAMINGLISTENERDESC to specify listener creation settings
*/
m_wfx.wFormatTag = WAVE_FORMAT_PCM;
m_wfx.nChannels = 2;
m_wfx.nSamplesPerSec = 22050;
m_wfx.nAvgBytesPerSec = 88200;
m_wfx.nBlockAlign = 4;
m_wfx.wBitsPerSample = 16;
m_wfx.cbSize = 0;

DWORD dwListenerBufferSizeInMS = 200;   // let's try 200 ms buffers
                                        // in this example
RSXSTREAMINGLISTENERDESC slDesc;
slDesc.cbSize = sizeof(RSXSTREAMINGLISTENERDESC);
slDesc.dwRequestedBufferSize = dwListenerBufferSizeInMS *
                               m_wfx.nAvgBytesPerSec / 1000;
slDesc.lpwf = &m_wfx;
slDesc.dwUser = 0;

if(SUCCEEDED(m_lpRSX->CreateStreamingListener(&slDesc, &m_lpSL, NULL))) {
. . .
Retrieves the default characteristics of the audio environment.
HRESULT GetEnvironment(lpEnvAttr)
Parameters
Returns
Description
The environment includes the coordinate system type (either right- or left-handed), the speed of sound, and the processing budget for audio localization. RSX 3D fills the data structure with all the environment values.
Usage Sample
/*
// Environment
*/
RSXENVIRONMENT rsxEnv;

/*
// Check which coordinate system is selected.
*/
rsxEnv.cbSize = sizeof(RSXENVIRONMENT);
lpRSX->GetEnvironment(&rsxEnv);

if (rsxEnv.bUseRightHand == TRUE) {
    // Right-hand coordinate system is being used
}
else {
    // Left-hand coordinate system is being used
}
See Also
Retrieves the current reverberation characteristics of the audio environment.
HRESULT GetReverb(lpReverbModel)
Parameters
Returns
Description
GetReverb fills in the RSXREVERBMODEL data structure pointed to by lpReverbModel. This structure contains the current reverberation model settings: decay time, intensity, and state (on or off). These settings let you model acoustics such as those experienced in rooms, tunnels, and halls.
Usage Sample
/*
// Get Reverberation characteristics
*/
RSXREVERBMODEL rsxRvb;

rsxRvb.cbSize = sizeof(RSXREVERBMODEL);
m_lpRSX->GetReverb(&rsxRvb);
. . .
See Also
Modifies the default characteristics of the audio environment.
HRESULT SetEnvironment(lpEnvAttr);
Parameters
Returns
Description
The environment includes parameters to specify the coordinate system type (either right- or left-handed). Default environment settings will be used if the lpEnvAttr pointer is NULL.
Usage Sample
/*
// Environment
*/
RSXENVIRONMENT rsxEnv;

/*
// Set right-handed coordinate system.
*/
rsxEnv.cbSize = sizeof(RSXENVIRONMENT);
rsxEnv.dwFlags = RSXENVIRONMENT_COORDINATESYSTEM;
rsxEnv.bUseRightHand = TRUE;
lpRSX->SetEnvironment(&rsxEnv);
. . .
See Also
Specifies the reverberation characteristics for the audio environment.
HRESULT SetReverb(lpReverbModel);
Parameters
LPRSXREVERBMODEL lpReverbModel
Returns
Description
Reverberation characteristics include the reverberation state (on/off), as well as the decay time and reverberation intensity.
Usage Sample
// Turn reverberation on
void CMainWindow::OnReverbOn()
{
    RSXREVERBMODEL rsxRvb;

    rsxRvb.cbSize = sizeof(RSXREVERBMODEL);
    rsxRvb.bUseReverb = TRUE;
    rsxRvb.fDecayTime = 1.0f;
    rsxRvb.fIntensity = 0.2f;

    m_lpRSX->SetReverb(&rsxRvb);
}
See Also
Copyright ©1996, 1997 Intel Corporation. All rights reserved