
I have an audio application in which I need to capture microphone samples and encode them to MP3 with ffmpeg. How do I configure the frame size when using AudioUnit.framework on iOS?

First, I configure the audio:

 
/**
 * We need to specify the format we want to work with.
 * We use Linear PCM because it is uncompressed and we work on raw data.
 *
 * We want 16-bit (2-byte, SInt16) samples, one frame per packet,
 * mono, at 8 kHz.
 */
    AudioStreamBasicDescription audioFormat; 
    audioFormat.mSampleRate   = SAMPLE_RATE; 
    audioFormat.mFormatID   = kAudioFormatLinearPCM; 
    audioFormat.mFormatFlags  = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger; 
    audioFormat.mFramesPerPacket = 1; 
    audioFormat.mChannelsPerFrame = 1; 
    audioFormat.mBitsPerChannel  = 8*sizeof(SInt16); // 16 bits per channel; this does not scale with the channel count
    audioFormat.mBytesPerPacket  = audioFormat.mChannelsPerFrame*sizeof(SInt16); 
    audioFormat.mBytesPerFrame  = audioFormat.mChannelsPerFrame*sizeof(SInt16); 
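
For context, this format is later applied to the remote I/O unit with AudioUnitSetProperty. A minimal sketch, assuming an already-created AudioUnit named audioUnit and the usual remote I/O bus numbering (bus 1 is the microphone input element):

 
#define kInputBus 1 // input element (microphone) of the remote I/O unit

// Set the format on the output scope of the input element: this is the
// format of the data flowing out of the microphone into the app.
OSStatus status = AudioUnitSetProperty(audioUnit,
                                       kAudioUnitProperty_StreamFormat,
                                       kAudioUnitScope_Output,
                                       kInputBus,
                                       &audioFormat,
                                       sizeof(audioFormat));
// status should be noErr; otherwise the unit rejected the format.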

The recording callback is:

 
static OSStatus recordingCallback(void *inRefCon, 
            AudioUnitRenderActionFlags *ioActionFlags, 
            const AudioTimeStamp *inTimeStamp, 
            UInt32 inBusNumber, 
            UInt32 inNumberFrames, 
            AudioBufferList *ioData) 
{ 
    NSLog(@"Log record: %lu", inBusNumber); 
    NSLog(@"Log record: %lu", inNumberFrames); 
    NSLog(@"Log record: %lu", (UInt32)inTimeStamp); 

    // the data gets rendered here 
    AudioBuffer buffer; 

    // a variable where we check the status 
    OSStatus status; 

    /** 
    This is the reference to the object who owns the callback. 
    */ 
    AudioProcessor *audioProcessor = (__bridge AudioProcessor*) inRefCon; 

    /** 
    At this point we define the number of channels, which is mono 
    for the iPhone. The number of frames is usually 512 or 1024. 
    */ 
    buffer.mDataByteSize = inNumberFrames * sizeof(SInt16); // total bytes: one SInt16 per frame (mono) 
    buffer.mNumberChannels = 1; // one channel 

    buffer.mData = malloc(inNumberFrames * sizeof(SInt16)); // allocate the capture buffer 

    // we put our buffer into a bufferlist array for rendering 
    AudioBufferList bufferList; 
    bufferList.mNumberBuffers = 1; 
    bufferList.mBuffers[0] = buffer; 

    // render input and check for error 
    status = AudioUnitRender([audioProcessor audioUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList); 
    [audioProcessor hasError:status:__FILE__:__LINE__]; 

    // process the bufferlist in the audio processor 
    [audioProcessor processBuffer:&bufferList]; 

    // clean up the buffer 
    free(bufferList.mBuffers[0].mData); 


    //NSLog(@"RECORD"); 
    return noErr; 
} 

With the data:

inBusNumber = 1

inNumberFrames = 1024

inTimeStamp = 80444304 // inTimeStamp is identical on every callback, which is strange

However, MP3 encoding needs 1152 samples per frame. How can I configure that frame size?

If I buffer, that introduces latency, which I want to avoid because this is a real-time application. With this configuration, each buffer ends up with trailing garbage: 1152 - 1024 = 128 bad samples. All samples are SInt16.


This doesn't directly address your question, but you should avoid calling Objective-C methods or any blocking functions (such as 'malloc' or 'free') inside your render callback. – sbooth
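
A minimal sketch of one way to follow that advice (not from the original post): allocate the capture buffer once, outside the callback, and avoid Objective-C messaging on the render thread. The kMaxFrames bound, the static storage, and the assumption that inRefCon points directly at the AudioUnit are all illustrative:

 
// Allocated once, e.g. during audio unit setup; never malloc'd/freed on the
// render thread. Sized for the largest slice the unit might deliver.
#define kMaxFrames 4096 // assumed upper bound on inNumberFrames
static SInt16 sCaptureData[kMaxFrames];

static OSStatus recordingCallbackNoAlloc(void *inRefCon,
                                         AudioUnitRenderActionFlags *ioActionFlags,
                                         const AudioTimeStamp *inTimeStamp,
                                         UInt32 inBusNumber,
                                         UInt32 inNumberFrames,
                                         AudioBufferList *ioData)
{
    // Assumes inRefCon was set to the address of the AudioUnit when the
    // callback was installed, so no Objective-C message send is needed here.
    AudioUnit unit = *(AudioUnit *)inRefCon;

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16);
    bufferList.mBuffers[0].mData = sCaptureData; // reused buffer, no allocation

    OSStatus status = AudioUnitRender(unit, ioActionFlags, inTimeStamp,
                                      inBusNumber, inNumberFrames, &bufferList);
    if (status != noErr) return status;

    // Hand the samples off through a lock-free structure (e.g. a ring buffer)
    // rather than calling Objective-C methods from this real-time thread.
    return noErr;
}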

Answer


You can configure the number of frames per slice that the AudioUnit will use with the property kAudioUnitProperty_MaximumFramesPerSlice. However, I think the best solution in your case is to buffer the incoming audio into a ring buffer and then signal your encoder when audio is available. Since you are transcoding to MP3, I'm not sure what real time means in this case anyway.
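
Note that kAudioUnitProperty_MaximumFramesPerSlice sets an upper bound on the slice size; it does not make the hardware deliver exactly 1152 frames per callback. For the ring-buffer route, here is a minimal single-producer/single-consumer sketch (illustrative names, not the answerer's code; a power-of-two capacity so the unsigned indices wrap cleanly, and overflow handling omitted for brevity):

 
#include <stdint.h>
#include <stdatomic.h>

#define MP3_FRAME     1152                /* samples per MPEG Layer III frame     */
#define RING_CAPACITY 65536u              /* power of two so indices wrap cleanly */
#define RING_MASK     (RING_CAPACITY - 1)

static int16_t     sRing[RING_CAPACITY];  /* int16_t is the same as SInt16     */
static atomic_uint sHead;                 /* write index, render callback only */
static atomic_uint sTail;                 /* read index, encoder thread only   */

/* Producer: called from the render callback with freshly rendered mono
   samples. No locks, no allocation. */
static void RingWrite(const int16_t *src, uint32_t count)
{
    unsigned head = atomic_load_explicit(&sHead, memory_order_relaxed);
    for (uint32_t i = 0; i < count; i++)
        sRing[(head + i) & RING_MASK] = src[i];
    atomic_store_explicit(&sHead, head + count, memory_order_release);
}

/* Consumer: called from the encoder thread. Copies one full 1152-sample
   MP3 frame into dst and returns 1, or returns 0 if not enough audio
   has accumulated yet. */
static int RingReadFrame(int16_t *dst)
{
    unsigned tail = atomic_load_explicit(&sTail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&sHead, memory_order_acquire);
    if (head - tail < MP3_FRAME)
        return 0;
    for (int i = 0; i < MP3_FRAME; i++)
        dst[i] = sRing[(tail + i) & RING_MASK];
    atomic_store_explicit(&sTail, tail + MP3_FRAME, memory_order_release);
    return 1;
}

In the callback, RingWrite(bufferList.mBuffers[0].mData, inNumberFrames) would replace the processBuffer: call, and the encoder thread loops on RingReadFrame, feeding each 1152-sample block to the encoder. The added latency is bounded by one MP3 frame: 1152 samples ≈ 144 ms at 8 kHz, which the MP3 format imposes in any case.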


I used a ring buffer and it works fine, thanks. – vgonisanz