2016-10-27

Short-circuited audio in a VOIP app using CallKit

I am using the SpeakerBox sample app as the basis for my VOIP application. I have managed to get everything working, but I can't seem to get rid of the "short-circuiting" of audio from the microphone to the device's speaker.

In other words, when I make a call I can hear my own voice through the speaker as well as the other person's. How can I change this?

AVAudioSession setup:

    AVAudioSession *sessionInstance = [AVAudioSession sharedInstance];

    NSError *error = nil;
    [sessionInstance setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    XThrowIfError((OSStatus)error.code, "couldn't set session's audio category");

    [sessionInstance setMode:AVAudioSessionModeVoiceChat error:&error];
    XThrowIfError((OSStatus)error.code, "couldn't set session's audio mode");

    NSTimeInterval bufferDuration = .005;
    [sessionInstance setPreferredIOBufferDuration:bufferDuration error:&error];
    XThrowIfError((OSStatus)error.code, "couldn't set session's I/O buffer duration");

    [sessionInstance setPreferredSampleRate:44100 error:&error];
    XThrowIfError((OSStatus)error.code, "couldn't set session's preferred sample rate");

IO unit setup:

- (void)setupIOUnit
{
    try {
        // Create a new instance of Apple Voice Processing IO
        AudioComponentDescription desc;
        desc.componentType = kAudioUnitType_Output;
        desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
        desc.componentManufacturer = kAudioUnitManufacturer_Apple;
        desc.componentFlags = 0;
        desc.componentFlagsMask = 0;

        AudioComponent comp = AudioComponentFindNext(NULL, &desc);
        XThrowIfError(AudioComponentInstanceNew(comp, &_rioUnit), "couldn't create a new instance of Apple Voice Processing IO");

        // Enable input and output on Apple Voice Processing IO
        // Input is enabled on the input scope of the input element
        // Output is enabled on the output scope of the output element
        UInt32 one = 1;
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &one, sizeof(one)), "could not enable input on Apple Voice Processing IO");
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &one, sizeof(one)), "could not enable output on Apple Voice Processing IO");

        // Explicitly set the input and output client formats
        // sample rate = 44100, num channels = 1, format = 32 bit floating point
        CAStreamBasicDescription ioFormat = CAStreamBasicDescription(44100, 1, CAStreamBasicDescription::kPCMFormatFloat32, false);
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &ioFormat, sizeof(ioFormat)), "couldn't set the input client format on Apple Voice Processing IO");
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &ioFormat, sizeof(ioFormat)), "couldn't set the output client format on Apple Voice Processing IO");

        // Set the MaximumFramesPerSlice property. This property is used to describe to an audio unit the maximum number
        // of samples it will be asked to produce on any single given call to AudioUnitRender
        UInt32 maxFramesPerSlice = 4096;
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, sizeof(UInt32)), "couldn't set max frames per slice on Apple Voice Processing IO");

        // Get the property value back from Apple Voice Processing IO. We are going to use this value to allocate buffers accordingly
        UInt32 propSize = sizeof(UInt32);
        XThrowIfError(AudioUnitGetProperty(_rioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, &propSize), "couldn't get max frames per slice on Apple Voice Processing IO");

        // We need references to certain data in the render callback
        // This simple struct is used to hold that information
        cd.rioUnit = _rioUnit;
        cd.muteAudio = &_muteAudio;
        cd.audioChainIsBeingReconstructed = &_audioChainIsBeingReconstructed;

        // Set the render callback on Apple Voice Processing IO
        AURenderCallbackStruct renderCallback;
        renderCallback.inputProc = performRender;
        renderCallback.inputProcRefCon = NULL;
        XThrowIfError(AudioUnitSetProperty(_rioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &renderCallback, sizeof(renderCallback)), "couldn't set render callback on Apple Voice Processing IO");

        // Initialize the Apple Voice Processing IO instance
        XThrowIfError(AudioUnitInitialize(_rioUnit), "couldn't initialize Apple Voice Processing IO instance");
    }
    catch (CAXException &e) {
        NSLog(@"Error returned from setupIOUnit: %d: %s", (int)e.mError, e.mOperation);
    }
    catch (...) {
        NSLog(@"Unknown error returned from setupIOUnit");
    }

    return;
}

To start the IO unit:

NSError *error = nil; 
[[AVAudioSession sharedInstance] setActive:YES error:&error]; 
if (nil != error) NSLog(@"AVAudioSession set active (TRUE) failed with error: %@", error); 

OSStatus err = AudioOutputUnitStart(_rioUnit); 
if (err) NSLog(@"couldn't start Apple Voice Processing IO: %d", (int)err); 
return err; 

To stop the IO unit:

NSError *error = nil; 
[[AVAudioSession sharedInstance] setActive:NO withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error]; 
if (nil != error) NSLog(@"AVAudioSession set active (FALSE) failed with error: %@", error); 

OSStatus err = AudioOutputUnitStop(_rioUnit); 
if (err) NSLog(@"couldn't stop Apple Voice Processing IO: %d", (int)err); 
return err; 

I am using PJSIP as my SIP stack, against an Asterisk server. The problem must be on the client side, because we also have a PJSIP-based Android implementation that does not have this issue.


I am investigating an almost identical problem in my app. If I understand the SpeakerBox configuration correctly, its code routes the input stream to the speaker, so I am not using that sample code. I am using pjsua_set_no_snd_dev() and pjsua_set_snd_dev() instead. In my case it is the other party who is affected by this short-circuit problem. Incidentally, my implementation works fine if I don't use CallKit.
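For reference, the pjsua calls mentioned in this comment are the usual way to keep PJSIP's sound device closed until CallKit hands the app an activated audio session. A rough, untested sketch; the callback placement and the default-device constants are my assumptions based on the PJSUA API, not code from this project:

```cpp
// At startup, before any call: tell PJSIP not to open a sound device yet.
pjsua_set_no_snd_dev();

// In CXProviderDelegate's didActivate audioSession callback, once CallKit
// has activated the AVAudioSession, attach PJSIP to the default devices.
pjsua_set_snd_dev(PJMEDIA_AUD_DEFAULT_CAPTURE_DEV, PJMEDIA_AUD_DEFAULT_PLAYBACK_DEV);

// In didDeactivate audioSession, release the sound device again.
pjsua_set_no_snd_dev();
```

This matches the flow described in pjsip ticket #1941: the SIP stack owns the IO unit, and it only opens it after CallKit has activated the session.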


Well, my problem also exists on older iOS versions (the other party can hear their own voice), but I am not sure where the problem lies or what is causing it. In your case I would suggest using the pjsip functions. For more details on pjsip and CallKit, see https://trac.pjsip.org/repos/ticket/1941

Answer

I ran into the same problem using WebRTC. I finally concluded that you should not set up the IO unit in AudioController.mm, but rather leave that to PJSIP (WebRTC, in my case).

The quick fix is as follows: comment out [self setupIOUnit]; in AudioController.mm's setupAudioChain, and comment out startAudio() in ProviderDelegate.swift's didActivate audioSession callback.
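Concretely, the change looks something like this (method names are those of the SpeakerBox-derived AudioController described in the question):

```cpp
// AudioController.mm: stop creating our own Voice Processing IO unit and
// leave audio I/O entirely to the SIP stack (PJSIP, or WebRTC in my case).
- (void)setupAudioChain
{
    [self setupAudioSession];
    // [self setupIOUnit];   // commented out: the SIP stack owns the IO unit
}
```

The corresponding startAudio() call in ProviderDelegate.swift's didActivate audioSession is commented out for the same reason.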
