2016-11-21

I am trying to play audio that comes from a socket, but the audio has a lot of noise (the original sound does not come through). How can I remove the noise from socket audio using an Audio Unit on iOS?

I am using a separate class for the speaker and want to filter the noise out of the sound. My code is:

OSStatus WNSpeakerOutputProc(void* inRefCon, AudioUnitRenderActionFlags* ioActionFlags, const AudioTimeStamp* inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList* ioData) {

    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Point the data-source struct at the unit's output buffer.
    WNSpeakerAudioData speakerAudioData;
    speakerAudioData.outBuffer = ioData->mBuffers[0].mData;
    speakerAudioData.outBufferSize = ioData->mBuffers[0].mDataByteSize;

    // Ask the WNSpeaker's data source to fill that buffer with socket audio.
    [[(WNSpeaker *)inRefCon dataSource] supplyAudioData:&speakerAudioData];
    [pool drain];
    return noErr;
}

Please give me some suggestions. Thank you.

Answers


It is fair to say that you cannot just open a socket and pipe the data straight into an audio render callback. You will need to buffer some minimum amount of data to de-jitter it. Are you using TCP or UDP? If UDP, how do you handle lost packets?
In general, you do not want to do anything slow inside the audio callback. I strongly suggest removing the creation and draining of the NSAutoreleasePool.
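
As a rough illustration of the de-jitter buffering, the socket-reading thread can append incoming bytes to a lock-free ring buffer instead of handing them straight to the render callback. This is only a sketch: it assumes the third-party TPCircularBuffer library, and names such as kRingBufferSize and the socket descriptor sockfd are made up for the example.

#import "TPCircularBuffer.h"
#include <sys/socket.h>

// Shared de-jitter buffer, initialized once before audio starts.
static TPCircularBuffer ringBuffer;
static const int32_t kRingBufferSize = 64 * 1024; // roughly 4 s of 8 kHz 16-bit mono

static void SetupRingBuffer(void)
{
    TPCircularBufferInit(&ringBuffer, kRingBufferSize);
}

// Runs on a background thread, never inside the render callback.
static void ReadSocketIntoRingBuffer(int sockfd)
{
    uint8_t chunk[2048];
    for (;;)
    {
        ssize_t received = recv(sockfd, chunk, sizeof(chunk), 0);
        if (received <= 0) break; // socket closed or error
        // Copy the received bytes in; if the buffer is full the data is dropped.
        TPCircularBufferProduceBytes(&ringBuffer, chunk, (int32_t)received);
    }
}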

I would also suggest deep-copying the data from your source into ioData, rather than calling the supplyAudioData selector.

It would help if you posted the details of WNSpeaker. It should copy the source data to the destination address ioData->mBuffers[0].mData, and it should copy the number of bytes implied by inNumberFrames.
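
A minimal sketch of such a callback, reusing the TPCircularBuffer from the previous example and assuming 16-bit mono PCM: the byte count is derived from inNumberFrames, and on underrun the remainder is zero-filled so the unit plays silence rather than stale memory.

#import <AudioToolbox/AudioToolbox.h>
#import "TPCircularBuffer.h"
#include <string.h>

extern TPCircularBuffer ringBuffer; // filled by the socket thread

static OSStatus RingBufferRenderCallback(void* inRefCon, AudioUnitRenderActionFlags* ioActionFlags, const AudioTimeStamp* inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList* ioData)
{
    // 16-bit mono: 2 bytes per frame.
    UInt32 bytesWanted = inNumberFrames * sizeof(SInt16);
    uint8_t *dst = (uint8_t *)ioData->mBuffers[0].mData;

    int32_t bytesAvailable = 0;
    void *src = TPCircularBufferTail(&ringBuffer, &bytesAvailable);

    UInt32 bytesToCopy = ((UInt32)bytesAvailable < bytesWanted) ? (UInt32)bytesAvailable : bytesWanted;
    if (bytesToCopy > 0)
    {
        memcpy(dst, src, bytesToCopy);
        TPCircularBufferConsume(&ringBuffer, (int32_t)bytesToCopy);
    }
    if (bytesToCopy < bytesWanted)
    {
        // Underrun: pad the rest of the buffer with silence.
        memset(dst + bytesToCopy, 0, bytesWanted - bytesToCopy);
    }
    return noErr;
}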


Here is a more detailed look at the audio unit setup:

-(void) setupAudioUnit 
{ 
    AudioComponentDescription desc; 
    desc.componentType = kAudioUnitType_Output; 
    desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO; 
    desc.componentManufacturer = kAudioUnitManufacturer_Apple; 
    desc.componentFlags = 0; 
    desc.componentFlagsMask = 0; 

    AudioComponent comp = AudioComponentFindNext(NULL, &desc); 

    OSStatus status; 

    status = AudioComponentInstanceNew(comp, &_audioUnit); 

    if(status != noErr) 
    { 
     NSLog(@"Error creating AudioUnit instance"); 
    } 

    // Enable input and output on AURemoteIO 
    // Input is enabled on the input scope of the input element 
    // Output is enabled on the output scope of the output element 

    UInt32 one = 1; 

    status = AudioUnitSetProperty(_audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, kOutputBus, &one, sizeof(one)); 


    if(status != noErr)
    {
     NSLog(@"Error enabling AudioUnit output bus");
    }

    // Explicitly set the client stream format on the output element's input scope
    // sample rate = 8000 Hz, 1 channel, 16-bit signed integer PCM (see getAudioDescription)

    AudioStreamBasicDescription audioFormat = [self getAudioDescription]; 

    status = AudioUnitSetProperty(_audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, kOutputBus, &audioFormat, sizeof(audioFormat)); 

    if(status != noErr) 
    { 
     NSLog(@"Error setting audio format"); 
    } 

    AURenderCallbackStruct renderCallback; 
    renderCallback.inputProc = OutputRenderCallback; 
    renderCallback.inputProcRefCon = (__bridge void *)(self); 

    status = AudioUnitSetProperty(_audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Global, kOutputBus, &renderCallback, sizeof(renderCallback)); 

    if(status != noErr) 
    { 
     NSLog(@"Error setting rendering callback"); 
    } 

    // Initialize the AURemoteIO instance 
    status = AudioUnitInitialize(_audioUnit); 

    if(status != noErr) 
    { 
     NSLog(@"Error initializing audio unit"); 
    } 
} 

- (AudioStreamBasicDescription)getAudioDescription { 
    AudioStreamBasicDescription audioDescription = {0}; 
    audioDescription.mFormatID   = kAudioFormatLinearPCM; 
    audioDescription.mFormatFlags  = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kAudioFormatFlagsNativeEndian; 
    audioDescription.mChannelsPerFrame = 1; 
    audioDescription.mBytesPerPacket = sizeof(SInt16)*audioDescription.mChannelsPerFrame; 
    audioDescription.mFramesPerPacket = 1; 
    audioDescription.mBytesPerFrame  = sizeof(SInt16)*audioDescription.mChannelsPerFrame; 
    audioDescription.mBitsPerChannel = 8 * sizeof(SInt16); 
    audioDescription.mSampleRate  = 8000.0; 
    return audioDescription; 
} 

I am now using a sample rate of 8000 and VoiceProcessingIO as the componentSubType, but the echo (noise) in the audio is not being cancelled. Please check this code. Thank you.
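
One thing that stands out in setupAudioUnit: the comment mentions enabling both input and output, but the code only enables the output element. The voice-processing unit cancels echo against the microphone path, so the input element (bus 1) usually has to be enabled as well. A minimal sketch, assuming a kInputBus constant defined as 1:

    UInt32 one = 1;
    OSStatus status = AudioUnitSetProperty(_audioUnit,
                                           kAudioOutputUnitProperty_EnableIO,
                                           kAudioUnitScope_Input,
                                           kInputBus,  // bus 1 = microphone element (assumed constant)
                                           &one,
                                           sizeof(one));
    if (status != noErr)
    {
        NSLog(@"Error enabling AudioUnit input bus");
    }

Enabling input also requires a record-capable AVAudioSession category such as AVAudioSessionCategoryPlayAndRecord, plus microphone permission.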