2011-08-11 103 views

I've been playing with a lot of interesting stuff today around iOS & Audio Units, and found many useful resources (SO included). What I want is to record the sound my iPhone app plays through an Audio Unit.

First of all, I'm confused about one thing: is it really necessary to create an audio graph with a mixer unit in order to record the sound played by the app?

Or is it enough to play the sounds with ObjectAL (or, more simply, with AVAudioPlayer calls) and create a single Remote IO unit with a recording callback on the right bus?

Second, a more programming-oriented question! Since I'm not yet comfortable with Audio Unit concepts, I tried adapting Apple's MixerHost project so that it records the final mix. Obviously, I tried to follow Michael Tyson's RemoteIO post to do this.

And I get an EXC_BAD_ACCESS in my callback function:

static OSStatus recordingCallback  (void *inRefCon, 
            AudioUnitRenderActionFlags *ioActionFlags, 
            const AudioTimeStamp *inTimeStamp, 
            UInt32  inBusNumber, 
            UInt32  inNumberFrames, 
            AudioBufferList *ioData) { 

    AudioBufferList *bufferList; // <- Fill this up with buffers (you will want to malloc it, as it's a dynamic-length list) 

    EffectState *effectState = (EffectState *)inRefCon; 
    AudioUnit rioUnit = effectState->rioUnit; 

    OSStatus status; 

    // BELOW I GET THE ERROR 
    status = AudioUnitRender(rioUnit,  
          ioActionFlags, 
          inTimeStamp, 
          inBusNumber, 
          inNumberFrames, 
          bufferList); 

    if (noErr != status) { NSLog(@"AudioUnitRender error"); return noErr;} 

    // Now, we have the samples we just read sitting in buffers in bufferList 
    //ExtAudioFileWriteAsync(effectState->audioFileRef, inNumberFrames, bufferList); 

    return noErr; 
} 
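Incidentally, `bufferList` in the callback above is only declared, never allocated, before being handed to `AudioUnitRender`, which by itself can produce an EXC_BAD_ACCESS. Below is a minimal sketch of how such a list could be allocated; the struct definitions are stand-ins for the real CoreAudio types (normally pulled in via `<AudioToolbox/AudioToolbox.h>`) so the sizing logic is self-contained, and `allocMonoBufferList` is my own helper name, not a CoreAudio call:

```c
#include <stdint.h>
#include <stdlib.h>

/* Stand-ins for the CoreAudio types, so the sizing logic is self-contained. */
typedef struct {
    uint32_t mNumberChannels;
    uint32_t mDataByteSize;
    void    *mData;
} AudioBuffer;

typedef struct {
    uint32_t    mNumberBuffers;
    AudioBuffer mBuffers[1];
} AudioBufferList;

/* Allocate a one-buffer list sized for inNumberFrames of the 16-bit mono
   interleaved format configured further down (2 bytes per frame). */
static AudioBufferList *allocMonoBufferList(uint32_t inNumberFrames,
                                            uint32_t bytesPerFrame) {
    AudioBufferList *list = malloc(sizeof(AudioBufferList));
    list->mNumberBuffers              = 1;
    list->mBuffers[0].mNumberChannels = 1;
    list->mBuffers[0].mDataByteSize   = inNumberFrames * bytesPerFrame;
    list->mBuffers[0].mData           = malloc(inNumberFrames * bytesPerFrame);
    return list;
}
```

The list (and each `mData` pointer) would also need to be freed, or allocated once outside the callback and reused, since malloc inside a render callback is bad practice on the real-time audio thread.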

Along with the callback, I have this in MixerHostAudio.h:

typedef struct { 
    AudioUnit rioUnit; 
    ExtAudioFileRef audioFileRef; 
} EffectState; 

and these declarations in the class interface:

AudioUnit      iOUnit; 
EffectState effectState; 
AudioStreamBasicDescription  iOStreamFormat; 
... 
@property      AudioUnit   iOUnit; 
@property (readwrite)   AudioStreamBasicDescription iOStreamFormat; 

Then in the implementation file, MixerHostAudio.m:

#define kOutputBus 0 
#define kInputBus 1 
... 
@synthesize iOUnit;     // the Remote IO unit 
... 
result = AUGraphNodeInfo (
          processingGraph, 
          iONode, 
          NULL, 
          &iOUnit 
          ); 

if (noErr != result) {[self printErrorMessage: @"AUGraphNodeInfo" withStatus: result]; return;} 

// Enable IO for recording 
UInt32 flag = 1; 
result = AudioUnitSetProperty(iOUnit, 
           kAudioOutputUnitProperty_EnableIO, 
           kAudioUnitScope_Input, 
           kInputBus, 
           &flag, 
           sizeof(flag)); 

if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty" withStatus: result]; return;} 
// Describe format 
iOStreamFormat.mSampleRate   = 44100.00; 
iOStreamFormat.mFormatID   = kAudioFormatLinearPCM; 
iOStreamFormat.mFormatFlags  = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked; 
iOStreamFormat.mFramesPerPacket = 1; 
iOStreamFormat.mChannelsPerFrame = 1; 
iOStreamFormat.mBitsPerChannel  = 16; 
iOStreamFormat.mBytesPerPacket  = 2; 
iOStreamFormat.mBytesPerFrame  = 2; 

// Apply format 
result = AudioUnitSetProperty(iOUnit, 
           kAudioUnitProperty_StreamFormat, 
           kAudioUnitScope_Output, 
           kInputBus, 
           &iOStreamFormat, 
           sizeof(iOStreamFormat)); 

if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty" withStatus: result]; return;} 

result = AudioUnitSetProperty(iOUnit, 
           kAudioUnitProperty_StreamFormat, 
           kAudioUnitScope_Input, 
           kOutputBus, 
           &iOStreamFormat, 
           sizeof(iOStreamFormat)); 

if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty" withStatus: result]; return;} 

effectState.rioUnit = iOUnit; 

// Set input callback ----> RECORDING 
AURenderCallbackStruct callbackStruct; 
callbackStruct.inputProc = recordingCallback; 
callbackStruct.inputProcRefCon = self; 
result = AudioUnitSetProperty(iOUnit, 
           kAudioOutputUnitProperty_SetInputCallback, 
           kAudioUnitScope_Global, 
           kInputBus, 
           &callbackStruct, 
           sizeof(callbackStruct)); 

if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty" withStatus: result]; return;} 
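As a sanity check on the stream format above: for interleaved linear PCM, `mBytesPerFrame` must equal `mChannelsPerFrame` × (`mBitsPerChannel` / 8), and `mBytesPerPacket` is `mBytesPerFrame` × `mFramesPerPacket`. A small sketch of that arithmetic (the helper names are mine, not CoreAudio's):

```c
#include <stdint.h>

/* Interleaved LPCM packing: one frame holds one sample per channel. */
static uint32_t lpcmBytesPerFrame(uint32_t channelsPerFrame,
                                  uint32_t bitsPerChannel) {
    return channelsPerFrame * (bitsPerChannel / 8);
}

/* For LPCM, mFramesPerPacket is 1, so bytes per packet = bytes per frame. */
static uint32_t lpcmBytesPerPacket(uint32_t bytesPerFrame,
                                   uint32_t framesPerPacket) {
    return bytesPerFrame * framesPerPacket;
}
```

With the 16-bit mono format above this gives 2 bytes per frame and 2 bytes per packet, matching the values set on `iOStreamFormat`; an inconsistent ASBD is a common source of `AudioUnitSetProperty` and `AudioUnitRender` errors.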

But I don't know what's wrong, or how to dig into it. Note: the EffectState struct is there because I'm also trying to integrate the BioAudio project's ability to write a file from the buffers.

And third: I wonder whether there is an easier way to record the sounds played by my iPhone app (i.e., excluding the microphone)?

Did it work for you, recording to an audio file? – Aadil

Yes, have a look at https://github.com/michaeltyson/TPAACAudioConverter – lefakir

Sorry, that's not exactly what I want. I want to mix two audio files and save the result into another audio file. Do you have any idea how? – Aadil

Answer

Found it myself. I had forgotten to hook things up like this:

callbackStruct.inputProcRefCon = &effectState; 

That's it for the code part. Now I have conceptual questions again...
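The crash makes sense in hindsight: the render callback receives `inputProcRefCon` back verbatim as `inRefCon`, so the cast to `EffectState *` is only valid if `&effectState` was registered, not `self` (an Objective-C object) as in the original code. A minimal stand-alone sketch of that round trip (`DemoEffectState` and `unitFromRefCon` are stand-ins for illustration, not part of the project):

```c
/* Stand-in for the EffectState struct above. */
typedef struct {
    int rioUnit;   /* an int here; the real struct holds an AudioUnit */
} DemoEffectState;

/* What the render callback does: cast inRefCon back to the struct.
   This is only safe if a DemoEffectState * was registered as the refCon. */
static int unitFromRefCon(void *inRefCon) {
    return ((DemoEffectState *)inRefCon)->rioUnit;
}
```

Registering `&effectState` as the refCon and then casting `inRefCon` back recovers the same struct; registering `self` and casting to `EffectState *` reinterprets unrelated memory, hence the EXC_BAD_ACCESS.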